fix: download directories with http fs #1893

Status: Open. Wants to merge 1 commit into base: master.
10 changes: 7 additions & 3 deletions fsspec/implementations/http.py
@@ -1,10 +1,11 @@
 import asyncio
 import io
 import logging
+import os
 import re
 import weakref
 from copy import copy
-from urllib.parse import urlparse
+from urllib.parse import unquote, urlparse

 import aiohttp
 import yarl
@@ -156,7 +157,7 @@ async def _ls_real(self, url, detail=True, **kwargs):
         kw.update(kwargs)
         logger.debug(url)
         session = await self.set_session()
-        async with session.get(self.encode_url(url), **self.kwargs) as r:
+        async with session.get(self.encode_url(url), **kw) as r:
             self._raise_not_found_for_status(r, url)

         if "Content-Type" in r.headers:
@@ -253,6 +254,9 @@ async def _get_file(
         kw.update(kwargs)
         logger.debug(rpath)
         session = await self.set_session()
+        if await self._isdir(rpath):
[Review thread on the `_isdir` check]

Member: This will result in essentially every file getting downloaded twice, once to check the directory. This information should already be available in _get(), so we can skip there rather than here. It doesn't really make sense to call _get_file() on a path that is not considered a file.

Author (@frostming, Jul 23, 2025): I tried to exclude dirs from the return result of self._expand_path, but it always adds the root to the result; if we do not check is_dir in get_file, it will also fail there.

I think I might need to change more things at this point, which is a bit risky for a first contribution. What is your suggestion?

Member: I'm sorry, I haven't had a chance to come up with something. I hope to get a little time at the start of next week.

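The maintainer's suggestion above (filter out directories in `_get()` so that `_get_file()` only ever sees files) could be sketched roughly as follows. This is a hypothetical toy model, not fsspec's actual implementation; `ToyFS` and its methods stand in for the real classes:

```python
import asyncio


class ToyFS:
    """Hypothetical stand-in for an async filesystem; fsspec's real
    _get()/_get_file() are considerably more involved."""

    def __init__(self, listing):
        # listing maps each remote path to "file" or "directory"
        self.listing = listing

    async def _isdir(self, path):
        return self.listing.get(path) == "directory"

    async def _get_file(self, rpath, lpath):
        # Pretend to download one file; just record the transfer.
        return (rpath, lpath)

    async def _get(self, rpaths, lpaths):
        # Drop directory entries up front, so _get_file() is only called
        # on real files -- avoiding a per-file _isdir() probe, which over
        # HTTP would cost an extra request for every path.
        pairs = [(r, l) for r, l in zip(rpaths, lpaths)
                 if not await self._isdir(r)]
        return await asyncio.gather(*(self._get_file(r, l) for r, l in pairs))


fs = ToyFS({"/index/": "directory", "/index/realfile": "file"})
done = asyncio.run(fs._get(["/index/", "/index/realfile"],
                           ["adir", "adir/realfile"]))
# Only the file entry is transferred; the directory entry is skipped.
```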
+            os.makedirs(unquote(lpath), exist_ok=True)
+            return
         async with session.get(self.encode_url(rpath), **kw) as r:
             try:
                 size = int(r.headers["content-length"])
@@ -264,7 +268,7 @@ async def _get_file(
         if isfilelike(lpath):
             outfile = lpath
         else:
-            outfile = open(lpath, "wb")  # noqa: ASYNC230
+            outfile = open(unquote(lpath), "wb")  # noqa: ASYNC230

         try:
             chunk = True
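A note on the `unquote(lpath)` calls: when a remote name contains percent-encoded characters (e.g. a space as `%20`), a local path derived from the URL should be decoded before files or directories are created, otherwise the escape sequences end up literally in the filename. A minimal stdlib illustration, not fsspec code:

```python
from urllib.parse import quote, unquote

remote_name = "my dir/data file.txt"
encoded = quote(remote_name)  # the form the name takes inside a URL
assert encoded == "my%20dir/data%20file.txt"

# Without unquote(), the downloaded file would literally be named
# "data%20file.txt"; decoding restores the intended local name.
assert unquote(encoded) == remote_name
```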
10 changes: 10 additions & 0 deletions fsspec/implementations/tests/test_http.py
@@ -319,6 +319,16 @@ def test_download(server, tmpdir):
     assert open(fn, "rb").read() == data


+def test_download_dir(server, tmpdir):
+    h = fsspec.filesystem("http", headers={"give_length": "true", "head_ok": "true "})
+    url = server.address + "/index/"
+    fn = os.path.join(tmpdir, "adir")
+    h.get(url, fn, recursive=True)
+    assert os.path.exists(fn)
+    assert os.path.exists(os.path.join(fn, "realfile"))
+    assert open(os.path.join(fn, "realfile"), "rb").read() == data
+
+
 def test_multi_download(server, tmpdir):
     h = fsspec.filesystem("http", headers={"give_length": "true", "head_ok": "true "})
     urla = server.realfile
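The behavior the new test pins down (directory entries become local directories via `os.makedirs`, files are written beneath them) can be sketched with the stdlib alone. `get_recursive` below is a hypothetical helper for illustration, not fsspec's API:

```python
import os
import tempfile
from urllib.parse import unquote


def get_recursive(entries, local_root):
    """Materialize a remote listing locally.

    entries: (path, kind, payload) triples, kind in {"directory", "file"};
    payload holds a file's bytes (None for directories).
    """
    for path, kind, payload in entries:
        lpath = os.path.join(local_root, unquote(path))
        if kind == "directory":
            # Mirrors the PR's fix: create the directory, download nothing.
            os.makedirs(lpath, exist_ok=True)
        else:
            os.makedirs(os.path.dirname(lpath), exist_ok=True)
            with open(lpath, "wb") as f:
                f.write(payload)


with tempfile.TemporaryDirectory() as tmp:
    get_recursive([("adir", "directory", None),
                   ("adir/realfile", "file", b"hello")], tmp)
    assert open(os.path.join(tmp, "adir", "realfile"), "rb").read() == b"hello"
```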