Commit Graph

13 Commits

Author SHA1 Message Date
Fake-Name 02e9831954 Better threading things. 2019-02-03 20:12:01 -08:00
Fake-Name f44b548be1 Misc updates. 2019-02-03 05:31:48 -08:00
Fake-Name d7cb062f5c Misc fixes. 2018-04-22 21:53:20 -07:00
Fake-Name e60ecff007 Fixing stuff. 2018-04-22 20:49:13 -08:00
  Apparently I somehow completely fucked up the xbooru fetcher. Wat.
Fake-Name fd41dbbd4c Add image metadata to the file table, and fetch more than one db row per query, because the get_job() function query was somehow completely slamming my database. 2017-11-25 00:01:31 -08:00
Fake-Name bd2abec2fc More fixes. 2017-11-23 20:58:24 -08:00
Fake-Name f9fad86d54 Ok, most of the scrapers are go. 2017-11-23 20:16:10 -08:00
  Whoooo!
Fake-Name 6b5daa3c4a Add sites as suggested in https://github.com/fake-name/DanbooruScraper/issues/2 2017-11-22 23:27:17 -08:00
Fake-Name 0570d20d7e Increment maximum submission numbers. 2017-11-22 23:12:38 -08:00
  I need to write something that extracts that automatically.
Fake-Name 7177817531 I think this should bring everything up to functional. 2017-11-22 23:08:27 -08:00
Fake-Name eea2949abf More work, most of the execution structure is implemented. 2017-11-21 22:30:55 -08:00
  Now, I have to update the plugins and make sure they're still OK.
Fake-Name 7061a147de Restructure mostly done, it at least starts running. 2017-11-20 22:57:18 -08:00
  I need to rework the db stuff a LOT though.
Fake-Name b36af77670 Move things about, update webrequest lib. 2017-11-20 21:02:57 -08:00
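
Commit fd41dbbd4c changes get_job() to pull more than one db row per query so the scheduler stops hammering the database. Below is a minimal sketch of that kind of batched claim, assuming a PostgreSQL "jobs" table with id, url, and state columns and the psycopg2 driver; the schema, the function name get_jobs, and the driver are illustrative assumptions, not the repository's actual get_job() code.

    import psycopg2

    def get_jobs(conn, batch_size=50):
        # Claim up to `batch_size` pending jobs in a single round trip instead of
        # issuing one query per job. FOR UPDATE SKIP LOCKED lets concurrent workers
        # claim disjoint batches without blocking each other (PostgreSQL 9.5+).
        # NOTE: table/column names here are assumptions for illustration only.
        with conn.cursor() as cur:
            cur.execute(
                """
                UPDATE jobs
                   SET state = 'claimed'
                 WHERE id IN (
                        SELECT id
                          FROM jobs
                         WHERE state = 'pending'
                         ORDER BY id
                         LIMIT %s
                           FOR UPDATE SKIP LOCKED
                       )
                RETURNING id, url
                """,
                (batch_size,),
            )
            rows = cur.fetchall()
        conn.commit()
        return rows

    # Example use (connection string is a placeholder):
    # jobs = get_jobs(psycopg2.connect("dbname=scraper"), batch_size=100)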