Metadata for the ArchiveTeam Docker Hub repositories


{
  "user": "warcforceone",
  "name": "snscrape",
  "namespace": "warcforceone",
  "repository_type": "image",
  "status": 1,
  "status_description": "active",
  "description": "",
  "is_private": false,
  "is_automated": false,
  "star_count": 0,
  "pull_count": 40,
  "last_updated": "2020-06-03T01:31:31.271076Z",
  "date_registered": "2019-07-29T14:38:45.408798Z",
  "collaborator_count": 0,
  "affiliation": null,
  "hub_user": "warcforceone",
  "has_starred": false,
  "full_description": "# snscrape\nsnscrape is a scraper for social networking services (SNS). It scrapes things like user profiles, hashtags, or searches and returns the discovered items, e.g. the relevant posts. \n\nThe following services are currently supported:\n* Facebook: user profiles and groups\n* Gab: user profile posts, media, and comments\n* Google+: user profiles\n* Instagram: user profiles, hashtags, and locations\n* Twitter: user profiles, hashtags, searches, threads, and lists (members as well as posts)\n* VKontakte: user profiles\n\n## Requirements\nsnscrape requires Python 3.6 or higher. The Python package dependencies are installed automatically when you install snscrape.\n\nNote that one of the dependencies, lxml, also requires libxml2 and libxslt to be installed.\n\n## Installation\n pip3 install snscrape\n\nIf you want to use the development version:\n\n pip3 install git+https://github.com/JustAnotherArchivist/snscrape.git\n\n## Usage\nTo get all tweets by Jason Scott (@textfiles):\n\n snscrape twitter-user textfiles\n\nIt's usually useful to redirect the output to a file for further processing, e.g. in bash using the filename `@textfiles-tweets`:\n```bash\nsnscrape twitter-user textfiles >twitter-@textfiles\n```\n\nTo get the latest 100 tweets with the hashtag #archiveteam:\n\n snscrape --max-results 100 twitter-hashtag archiveteam\n\n`snscrape --help` or `snscrape <module> --help` provides details on the available options. `snscrape --help` also lists all available modules.\n\nIt is also possible to use snscrape as a library in Python, but this is currently undocumented.\n\n## Issue reporting\nIf you discover an issue with snscrape, please report it at <https://github.com/JustAnotherArchivist/snscrape/issues>. If possible please run snscrape with `-vv` and `--dump-locals` and include the log output as well as the dump files referenced in the log in the issue. Note that the files may contain sensitive information in some cases and could potentially be used to identify you (e.g. if the service includes your IP address in its response). If you prefer to arrange a file transfer privately, just mention that in the issue.\n\n## License\nThis program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.\n\nThis program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.\n",
  "permissions": {
    "read": true,
    "write": false,
    "admin": false
  },
  "media_types": [
    "application/vnd.docker.container.image.v1+json"
  ],
  "content_types": [
    "image"
  ],
  "categories": []
}
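Records of this shape come from Docker Hub's public v2 repository endpoint (https://hub.docker.com/v2/repositories/<namespace>/<name>/). Below is a minimal sketch of fetching one such record, assuming the `requests` library is installed and the endpoint remains available without authentication; the helper name is illustrative, not part of this repository.

```python
# Sketch: retrieve Docker Hub repository metadata matching the record above.
# Assumes the public v2 endpoint is reachable unauthenticated and that the
# `requests` package is installed.
import json
import requests


def fetch_repo_metadata(namespace: str, name: str) -> dict:
    """Fetch the metadata record for one Docker Hub repository."""
    url = f"https://hub.docker.com/v2/repositories/{namespace}/{name}/"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Example: the repository whose metadata is stored in this file.
    meta = fetch_repo_metadata("warcforceone", "snscrape")
    print(json.dumps(meta, indent=2))
```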