Compare commits

..

151 commits

Author SHA1 Message Date
a38b021427 revert a99eaf06b8
revert Allow to bite users

Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
Signed-off-by: limepotato <limepot@protonmail.ch>
2024-09-02 17:40:41 -06:00
marcin mikołajczak
a99eaf06b8 Allow to bite users
Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
Signed-off-by: limepotato <limepot@protonmail.ch>
2024-09-02 17:37:39 -06:00
floatingghost
3bb31117e6 Merge pull request 'Handle domain mutes on the backend' (#804) from domain-mute-backend-processing into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/804
2024-08-20 10:32:47 +00:00
Floatingghost
2c5c531c35 readd comment about domain mutes 2024-08-20 11:05:36 +01:00
floatingghost
3ff0f46b9f Merge pull request 'Docs: Improve backup restore + fix warnings' (#554) from ilja/akkoma:docs_db_create_in_separate_commands into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/554
2024-06-25 21:33:42 +00:00
floatingghost
4f0cb61782 Merge pull request 'Move prune changelog entries to correct version' (#808) from norm/akkoma:prune-changelog into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/808
2024-06-23 02:20:36 +00:00
floatingghost
5fdb5d69d2 Merge pull request 'Update Caddyfile' (#809) from norm/akkoma:caddyfile-update into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/809
2024-06-23 02:20:24 +00:00
floatingghost
f66135ed08 Merge pull request 'Avoid accumulation of stale data in websockets' (#806) from Oneric/akkoma:websocket_fullsweep into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/806
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
2024-06-23 02:19:36 +00:00
floatingghost
dc34328f15 Merge pull request 'Fix elixir 1.17 and migration lock warnings' (#810) from Oneric/akkoma:ex1.17-warnings into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/810
2024-06-23 02:18:41 +00:00
Oneric
13e2a811ec Avoid accumulation of stale data in websockets
We’ve received reports of some specific instances slowly accumulating
more and more binary data over time up to OOMs and globally setting
ERL_FULLSWEEP_AFTER=0 has proven to be an effective countermeasure.
However, this incurs increased cpu perf costs everywhere and is
thus not suitable to apply out of the box.

Apparently long-lived Phoenix websocket processes are known to
often cause exactly this by getting into a state unfavourable
for the garbage collector.
Therefore it seems likely affected instances are using timeline
streaming and do so in just the right way to trigger this. We
can tune the garbage collector just for websocket processes
and use a more lenient value of 20 to keep the added perf cost
in check.

Testing on one affected instance appears to confirm this theory

Ref.:
  https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
  https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
  https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060

Tested-by: bjo
2024-06-22 22:22:33 +02:00
Oneric
1a4238bf98 cosmetic: fix concurrent index creation warnings
Since those old migrations will now most likely only run during db init,
there’s not much point in running them in the background concurrently
anyway, so just drop the cncurrent setting rather than disabling
migration locks.
2024-06-19 02:25:23 +02:00
Oneric
c3069b9478 cosmetic: fix elixir 1.17 compiler warnings in main application 2024-06-19 01:49:59 +02:00
Norm
51f09531c4 Disable gzip compression in Caddyfile
Currently Akkoma doesn't have any proper mitigations against BREACH,
which exploits the use of HTTP compression to exfiltrate sensitive data.
(see: https://akkoma.dev/AkkomaGang/akkoma/pulls/721#issuecomment-11487)

To err on the side of caution, disable gzip compression for now until we
can confirm that there's some sort of mitigation in place (whether that
would be Heal-The-Breach on the Caddy side or any Akkoma-side
mitigations).
2024-06-17 23:13:55 -04:00
Norm
962847fdc3 Uncomment media subdomain settings in Caddyfile
Now that a media subdomain is strongly recommended for security reasons,
there is no reason for them to be commented out by default.
2024-06-17 23:12:55 -04:00
Norm
83aab0859a Move prune changelog entries to correct version 2024-06-17 22:41:40 -04:00
Weblate
eb2b0d26e4 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
91870590ec Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
c442877c25 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
16af0bad55 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
16ee6ed500 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
babf5df0e7 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
5767f59294 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
72ce0b7759 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
0cf9b44179 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
3cf335c4d0 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
1556e2be8e Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
629077dce4 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
50256af6f6 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
c5d36d9679 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
fb4c5b97c7 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
a715cf4b3c Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
693a6486da Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:03 +00:00
Weblate
4e353f0335 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:03 +00:00
floatingghost
5992e8bb16 Merge pull request 'Update http-signatures dep, allow created header' (#800) from created-pseudoheader into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/800
2024-06-17 21:52:59 +00:00
Floatingghost
57273754b7 we may as well handle (expires) as well 2024-06-17 22:30:14 +01:00
floatingghost
59bfdf2ca4 Merge pull request 'Add limit CLI flags to prune jobs' (#655) from Oneric/akkoma:prune-batch into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/655
2024-06-17 20:47:53 +00:00
floatingghost
a9e2e31e3b Merge pull request 'Remove proxy_remote vestiges' (#805) from Oneric/akkoma:purge_proxy_remote into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/805
2024-06-17 20:47:11 +00:00
Oneric
bf8f493ffd Remove proxy_remote vestiges
Ever since 364b6969eb
this setting wasn't used by the backend and a noop.
The stated usecase is better served by setting the base_url
to a local subdomain and using proxying in nginx/Caddy/...
2024-06-16 01:21:52 +02:00
Floatingghost
3b197503d2 me me stupid person 2024-06-15 15:30:02 +01:00
Floatingghost
c0b2bba55e revert subdomain change until i can look at why i did that 2024-06-15 15:14:42 +01:00
Floatingghost
4b765b1886 mix format 2024-06-15 15:06:28 +01:00
Floatingghost
cba2c5725f Filter emoji reaction accounts by domain blocks 2024-06-15 15:05:52 +01:00
Floatingghost
2b96c3b224 Update http-signatures dep, allow created header 2024-06-12 18:40:44 +01:00
floatingghost
b03edb4ff4 Merge pull request 'Fix StealEmoji’s max size check' (#793) from Oneric/akkoma:emojistealer_contentlength into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/793
2024-06-12 17:09:05 +00:00
floatingghost
5b75fb2a2f Merge pull request 'pool timeouts/rich media cherry-picks' (#796) from pool-timeouts into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/796
2024-06-12 17:08:06 +00:00
Floatingghost
4d6fb43cbd No need to spawn() any more 2024-06-12 02:09:24 +01:00
Floatingghost
ad52135bf5 Convert rich media backfill to oban task 2024-06-11 18:06:51 +01:00
Floatingghost
28d357f52c add diagnostic script 2024-06-10 15:10:47 +01:00
Floatingghost
9c5feb81aa fix tests 2024-06-09 21:26:29 +01:00
Floatingghost
a360836ce3 fix oembed test 2024-06-09 21:17:12 +01:00
Floatingghost
840c70c4fa remove prints 2024-06-09 18:52:09 +01:00
Floatingghost
c65379afea attempt to fix some tests 2024-06-09 18:45:38 +01:00
Floatingghost
16bed0562d Fix tests 2024-06-09 18:28:00 +01:00
Mark Felder
a801dd7b07 Fix module struct matching 2024-06-09 17:38:28 +01:00
Mark Felder
1e86da43f5 Credo 2024-06-09 17:38:24 +01:00
Mark Felder
411831458c Credo 2024-06-09 17:38:18 +01:00
Mark Felder
56463b2121 Fix compile warning
warning: "else" clauses will never match because all patterns in "with" will always match
  lib/pleroma/web/rich_media/parser/ttl/opengraph.ex:10
2024-06-09 17:38:12 +01:00
Mark Felder
2f5eb79473 Mastodon API: Remove deprecated GET /api/v1/statuses/:id/card endpoint
Removed back in 2019

https://github.com/mastodon/mastodon/pull/11213
2024-06-09 17:38:06 +01:00
Mark Felder
f4daa90bd8 Remove test validating missing descriptions are returned as an empty string 2024-06-09 17:37:59 +01:00
Mark Felder
688748b531 Improve test description 2024-06-09 17:37:32 +01:00
Mark Felder
2e5aa71176 Rich Media Cards are fetched asynchonously and not guaranteed to be available on first post render 2024-06-09 17:37:22 +01:00
Mark Felder
7ca655a999 Rich Media Cards are cached by URL not per status 2024-06-09 17:36:57 +01:00
Mark Felder
4746f98851 Fix broken Rich Media parsing when the image URL is a relative path 2024-06-09 17:36:28 +01:00
Mark Felder
765c7e98d2 Respect the TTL returned in OpenGraph tags 2024-06-09 17:36:15 +01:00
Mark Felder
ddbe989461 Fix broken tests 2024-06-09 17:35:47 +01:00
Floatingghost
4a3dd5f65e lost in cherry-pick 2024-06-09 17:34:41 +01:00
Mark Felder
bfe4152385 Increase the :max_body for Rich Media to 5MB
Websites are increasingly getting more bloated with tricks like inlining content (e.g., CNN.com) which puts pages at or above 5MB. This value may still be too low.
2024-06-09 17:34:29 +01:00
Mark Felder
5da9cbd8a5 RichMedia refactor
Rich Media parsing was previously handled on-demand with a 2 second HTTP request timeout and retained only in Cachex. Every time a Pleroma instance is restarted it will have to request and parse the data for each status with a URL detected. When fetching a batch of statuses they were processed in parallel to attempt to keep the maximum latency at 2 seconds, but often resulted in a timeline appearing to hang during loading due to a URL that could not be successfully reached. URLs which had images links that expire (Amazon AWS) were parsed and inserted with a TTL to ensure the image link would not break.

Rich Media data is now cached in the database and fetched asynchronously. Cachex is used as a read-through cache. When the data becomes available we stream an update to the clients. If the result is returned quickly the experience is almost seamless. Activities were already processed for their Rich Media data during ingestion to warm the cache, so users should not normally encounter the asynchronous loading of the Rich Media data.

Implementation notes:

- The async worker is a Task with a globally unique process name to prevent duplicate processing of the same URL
- The Task will attempt to fetch the data 3 times with increasing sleep time between attempts
- The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
- URLs that cannot be successfully parsed due to an unexpected error receives a negative cache entry for 15 minutes
- URLs that fail with an expected error will receive a negative cache with no TTL
- Activities that have no detected URLs insert a nil value in the Cachex :scrubber_cache so we do not repeat parsing the object content with Floki every time the activity is rendered
- Expiring image URLs are handled with an Oban job
- There is no automatic cleanup of the Rich Media data in the database, but it is safe to delete at any time
- The post draft/preview feature makes the URL processing synchronous so the rendered post preview will have an accurate rendering

Overall performance of timelines and creating new posts which contain URLs is greatly improved.
2024-06-09 17:33:48 +01:00
Floatingghost
a924e117fd Add pool timeouts 2024-06-09 17:20:29 +01:00
floatingghost
d1c4b97613 Merge pull request 'Raise minimum PostgreSQL version to 12' (#786) from Oneric/akkoma:psql-min-ver into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/786
2024-06-07 16:53:22 +00:00
Oneric
2180d068ae Raise log level for start failures 2024-06-07 16:21:21 +02:00
Oneric
a3840e7d1f Raise minimum PostgreSQL version to 12
This lets us:
 - avoid issues with broken hash indices for PostgreSQL <10
 - drop runtime checks and legacy codepaths for <11 in db search
 - always enable custom query plans for performance optimisation

PostgreSQL 11 is already EOL since 2023-11-09, so
in theory everyone should already have moved on to 12 anyway.
2024-06-07 16:21:09 +02:00
Oneric
b17d3dc6d8 Fix changelog
Apparently got jumbled during some rebase(s)
2024-06-07 16:20:34 +02:00
floatingghost
f8f364d36d Merge pull request 'Handle errors from HTTP requests gracefully' (#791) from wp-embeds into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/791
2024-06-07 12:58:58 +00:00
floatingghost
329d8fcba8 Merge pull request 'Update PGTune recommendations' (#795) from norm/akkoma:pgtune into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/795
2024-06-07 12:57:00 +00:00
Norm
e2860e5292 Update PGTune recommendations
From experience, setting DB type to "Online transaction processing
system" seems to give the most optimal configuration in terms of
performance.

I also increased the recomended max connections to 25-30 as that leaves
some room for maintenance tasks to run without running out of
connections.

Finally, I removed the example configs since they're probably out of
date and I think it's better to direct people to use PGTune instead.
2024-06-06 12:18:51 -04:00
Oneric
df27567d99 mrf/steal_emoji: display download_unknown_size in admin-fe
Fixes omission in d6d838cbe8
2024-06-05 20:14:10 +02:00
Oneric
be5440c5e8 mrf/steal_emoji: fix size limit check
Headers are strings, but this expected to already get an int
thus always failing the comparison if the header was set.

Fixes mistake in d6d838cbe8
2024-06-05 20:11:53 +02:00
Oneric
68fe0a9633 test: fix content-length value type
All headers are strings, always.
In this case it didn't matter atm,
but let’s not provide confusing examples.
2024-06-05 19:59:59 +02:00
Floatingghost
0f65dd3ebe remove pointless logger 2024-06-04 14:34:59 +01:00
Floatingghost
38d09cb0ce remove now-pointless clause 2024-06-04 14:34:18 +01:00
Floatingghost
c9a03af7c1 Move rescue to the HTTP request itself 2024-06-04 14:30:16 +01:00
Floatingghost
0f7ae0fa21 am i baka 2024-06-04 14:26:33 +01:00
Floatingghost
30e13a8785 Don't error on rich media fail 2024-06-04 14:21:40 +01:00
Floatingghost
778b213945 enqueue pin fetches after changeset validation 2024-06-01 08:25:35 +01:00
Oneric
bed7ff8e89 mix: consistently use shell_info and shell_error
Logger output being visible depends on user configuration, but most of
the prints in mix tasks should always be shown. When running inside a
mix shell, it’s probably preferable to send output directly to it rather
than using raw IO.puts and we already have shell_* functions for this,
let’s use them everywhere.
2024-05-31 17:17:42 +02:00
Oneric
70cd5f91d8 dbprune/activites: prune array activities first
This query is less costly; if something goes wrong or gets aborted later
at least this part will arelady be done.
2024-05-31 17:16:40 +02:00
Oneric
aeaebb566c dbprune: allow splitting array and single activity prunes
The former is typically just a few reports; it doesn't make sense to
rerun it over and over again in batched prunes or if a full prune OOMed.
2024-05-31 17:16:40 +02:00
Oneric
5751637926 dbprune: use query! 2024-05-31 17:16:40 +02:00
Oneric
24bab63cd8 dbprune: add more logs
Pruning can go on for a long time; give admins some insight into that
something is happening to make it less frustrating and to make it easier
which part of the process is stalled should this happen.

Again most of the changes are merely reindents;
review with whitespace changes hidden recommended.
2024-05-31 17:16:40 +02:00
Oneric
1d4c212441 dbprune: shortcut array activity search
This brought down query costs from 7,953,740.90 to 47,600.97
2024-05-31 17:16:40 +02:00
Oneric
6e7cbf1885 Test both standalone and flag mode for pruning orphaned activities 2024-05-31 17:16:40 +02:00
Oneric
225f87ad62 Also allow limiting the initial prune_object
May sometimes be helpful to get more predictable runtime
than just with an age-based limit.

The subquery for the non-keep-threads path is required
since delte_all does not directly accept limit().

Again most of the diff is just adjusting indentation, best
hide whitespace-only changes with git diff -w or similar.
2024-05-31 17:16:40 +02:00
Oneric
e64f031167 Log number of deleted rows in prune_orphaned_activities
This gives feedback when to stop rerunning limited batches.

Most of the diff is just adjusting indentation; best reviewed
with whitespace-only changes hidden, e.g. `git diff -w`.
2024-05-31 17:16:40 +02:00
Oneric
fa52093bac Add standalone prune_orphaned_activities CLI task
This part of pruning can be very expensive and bog down the whole
instance to an unusable sate for a long time. It can thus be desireable
to split it from prune_objects and run it on its own in smaller limited batches.

If the batches are smaller enough and spaced out a bit, it may even be possible
to avoid any downtime. If not, the limit can still help to at least make the
downtime duration somewhat more predictable.
2024-05-31 17:16:40 +02:00
Oneric
3126d15ffc refactor: move prune_orphaned_activities into own function
No logic changes. Preparation for standalone orphan pruning.
2024-05-31 17:16:39 +02:00
floatingghost
8f97c15b07 Merge pull request 'Preserve Meilisearch’s result ranking' (#772) from Oneric/akkoma:search-meili-order into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/772
2024-05-31 14:12:05 +00:00
Floatingghost
3af0c53a86 use proper workers for fetching pins instead of an ad-hoc task (#788)
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/788
Co-authored-by: Floatingghost <hannah@coffee-and-dreams.uk>
Co-committed-by: Floatingghost <hannah@coffee-and-dreams.uk>
2024-05-31 08:58:52 +00:00
Oneric
fc7e07f424 meilisearch: enable using search_key
Using only the admin key works as well currently
and Akkoma needs to know the admin key to be able
to add new entries etc. However the Meilisearch
key descriptions suggest the admin key is not
supposed to be used for searches, so let’s not.

For compatibility with existings configs, search_key remains optional.
2024-05-29 23:17:27 +00:00
Oneric
59685e25d2 meilisearch: show keys by name not description
This makes show-key’s output match our documentation as of Meilisearch
1.8.0-8-g4d5971f343c00d45c11ef0cfb6f61e83a8508208. Since I’m not sure
if older versions maybe only provided description, it will fallback to
the latter if no name parameter exists.
2024-05-29 23:17:27 +00:00
Oneric
65aeaefa41 meilisearch: respect meili’s result ranking
Meilisearch is already configured to return results sorted by a
particular ranking configured in the meilisearch CLI task.
Resorting the returned top results by date partially negates this and
runs counter to what someone with tweaked settings expects.

Issue and fix identified by AdamK2003 in
https://akkoma.dev/AkkomaGang/akkoma/pulls/579
But instead of using a O(n^2) resorting, this commit directly
retrieves results in the correct order from the database.

Closes: https://akkoma.dev/AkkomaGang/akkoma/pulls/579
2024-05-29 23:17:27 +00:00
Oneric
5d6cb6a459 meilisearch: remove duplicate preload 2024-05-29 23:17:27 +00:00
floatingghost
8afc3bee7a Merge pull request 'Use /var/tmp for media cache path' (#776) from norm/akkoma:nginx-var-tmp into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/776
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
2024-05-28 02:05:17 +00:00
floatingghost
72871d4514 Merge pull request 'Drop unused indices' (#767) from Oneric/akkoma:purge-unused-indices into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/767
2024-05-28 01:35:18 +00:00
floatingghost
72af38c0e9 Merge pull request 'migrate CI config to v2' (#785) from woodpecker-v2 into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/785
2024-05-27 03:32:40 +00:00
Floatingghost
ae19fd90c9 use elixir 1.16 for format checks 2024-05-27 04:07:44 +01:00
Floatingghost
66b3248dd3 mix tests probably shouldn't be async 2024-05-27 04:03:13 +01:00
Floatingghost
73ead8656a don't allow emoji formatter to be async 2024-05-27 03:25:18 +01:00
Floatingghost
f32a7fd76a arch is aarch64 now 2024-05-27 03:02:02 +01:00
Floatingghost
4078fd655c migrate CI config to v2 2024-05-27 02:56:05 +01:00
floatingghost
5bdef8c724 Merge pull request 'Allow for attachment to be a single object in user data' (#783) from single-attachment into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/783
2024-05-27 01:44:53 +00:00
floatingghost
cdc918c8f1 Merge pull request 'Document AP and nodeinfo extensions' (#778) from Oneric/akkoma:doc_ap-extensions into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/778
2024-05-27 01:34:58 +00:00
Floatingghost
f15eded3e1 Add extra test case for nonsense field, increase timeouts 2024-05-27 02:09:48 +01:00
Oneric
05eda169fe Document AP and nodeinfo extensions
And while add it point to this via a top-level
FEDERATION.md document as standardised by FEP-67ff.

Also add a few missing descriptions to the config cheatsheet
and move the recently removed C2S extension into an appropiate
subsection.
2024-05-26 19:04:06 +02:00
floatingghost
3ce855cbde Merge pull request 'Fix Exiftool stderr being read as an image description' (#782) from norm/akkoma:fix-exiftool-description into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/782
2024-05-26 16:11:12 +00:00
Floatingghost
da67e69af5 Allow for attachment to be a single object in user data 2024-05-26 17:09:26 +01:00
Norm
c2d3221be3 Fix Exiftool stderr being read as an image description
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/773
2024-05-23 14:44:17 -04:00
Floatingghost
5e92f955ac bump version 2024-05-22 19:42:25 +01:00
Floatingghost
b72127b45a Merge remote-tracking branch 'oneric-sec/media-owner' into develop 2024-05-22 19:36:10 +01:00
Oneric
9a91299f96 Don't try to handle non-media objects as media
Trying to display non-media as media crashed the renderer,
but when posting a status with a valid, non-media object id
the post was still created, but then crashed e.g. timeline rendering.
It also crashed C2S inbox reads, so this could not be used to leak
private posts.
2024-05-22 20:30:23 +02:00
Oneric
fbd961c747 Drop activity_type override for uploads
Afaict this was never used, but keeping this (in theory) possible
hinders detecting which objects are actually media uploads and
which proper ActivityPub objects.

It was originally added as part of upload support itself in
02d3dc6869 without being used
and `git log -S:activity_type` and `git log -Sactivity_type:`
don't find any other commits using this.
2024-05-22 20:30:23 +02:00
Oneric
0c2b33458d Restrict media usage to owners
In Mastodon media can only be used by owners and only be associated with
a single post. We currently allow media to be associated with several
posts and until now did not limit their usage in posts to media owners.
However, media update and GET lookup was already limited to owners.
(In accordance with allowing media reuse, we also still allow GET
lookups of media already used in a post unlike Mastodon)

Allowing reuse isn’t problematic per se, but allowing use by non-owners
can be problematic if media ids of private-scoped posts can be guessed
since creating a new post with this media id will reveal the uploaded
file content and alt text.
Given media ids are currently just part of a sequentieal series shared
with some other objects, guessing media ids is with some persistence
indeed feasible.

E.g. sampline some public media ids from a real-world
instance with 112 total and 61 monthly-active users:

  17.465.096  at  t0
  17.472.673  at  t1 = t0 + 4h
  17.473.248  at  t2 = t1 + 20min

This gives about 30 new ids per minute of which most won't be
local media but remote and local posts, poll answers etc.
Assuming the default ratelimit of 15 post actions per 10s, scraping all
media for the 4h interval takes about 84 minutes and scraping the 20min
range mere 6.3 minutes. (Until the preceding commit, post updates were
not rate limited at all, allowing even faster scraping.)
If an attacker can infer (e.g. via reply to a follower-only post not
accessbile to the attacker) some sensitive information was uploaded
during a specific time interval and has some pointers regarding the
nature of the information, identifying the specific upload out of all
scraped media for this timerange is not impossible.

Thus restrict media usage to owners.

Checking ownership just in ActivitDraft would already be sufficient,
since when a scheduled status actually gets posted it goes through
ActivityDraft again, but would erroneously return a success status
when scheduling an illegal post.

Independently discovered and fixed by mint in Pleroma
1afde067b1
2024-05-22 20:30:18 +02:00
Floatingghost
842cac2a50 ensure we mock_global 2024-05-22 19:30:03 +01:00
Lain Soykaf
3e1f5e5372 WebFingerControllerTest: Restore host after test. 2024-05-22 19:27:51 +01:00
marcin mikołajczak
3a21293970 Fix tests
Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
2024-05-22 19:27:31 +01:00
marcin mikołajczak
0d66237205 Fix validate_webfinger when running a different domain for Webfinger
Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
2024-05-22 19:20:02 +01:00
Oneric
6ef6b2a289 Apply rate limits to status updates 2024-05-22 20:18:08 +02:00
Oneric
94e9c8f48a Purge unused media description update on post
In MastoAPI media descriptions are updated via the
media update API not upon post creation or post update.

This functionality was originally added about 6 years ago in
ba93396649 which was part of
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/626 and
https://git.pleroma.social/pleroma/pleroma-fe/-/merge_requests/450.
They introduced image descriptions to the front- and backend,
but predate adoption of Mastodon API.

For a while adding an `descriptions` array on post creation might have
continued to work as an undocumented Pleroma extension to Masto API, but
at latest when OpenAPI specs were added for those endpoints four years
ago in 7803a85d2c, these codepaths ceased
to be used. The API specs don’t list a `descriptions` parameter and
any unknown parameters are stripped out.

The attachments_from_ids function is only called from
ScheduledActivity and ActivityDraft.create with the latter
only being called by CommonAPI.{post,update} whihc in turn
are only called from ScheduledActivity again, MastoAPI controller
and without any attachment or description parameter WelcomeMessage.
Therefore no codepath can contain a descriptions parameter.
2024-05-22 20:18:08 +02:00
Oneric
873aa9da1c activity_draft: mark new/2 as private 2024-05-22 20:18:08 +02:00
Oneric
34a48cb87f scheduled_activity: mark private functions as private
And remove unused due_activities/1
2024-05-22 20:18:08 +02:00
lain
50403351f4 add impostor test for webfinger 2024-05-22 19:17:34 +01:00
Alex Gleason
a953b1d927 Prevent spoofing webfinger 2024-05-22 19:08:37 +01:00
Norm
bb29c5bed2 Update tor/i2p guide
Direct users to add in the appropriate headers and update the listening
port instead of copy/pasting a config that's already outdated and
probably would otherwise have to be synced with the main example nginx
config.
2024-05-16 19:08:02 -04:00
Norm
bc46f3da4c Update mediaproxy howto
Since the configuration options on the nginx side already exist in the
sample config, there's no need to tell users to copy-paste those
settings in again.
2024-05-16 19:06:59 -04:00
Norm
7e709768c3 Use /var/tmp for media cache path in apache/nginx configs
The /var/tmp directory is not mounted as tmpfs unlike /tmp which is
mounted as such on some distros like Fedora or Arch. Since there isn't
really a benefit to having the cache on tmpfs, this change should allow
for a larger cache if needed without worrying about running out of RAM.
2024-05-15 20:42:48 -04:00
floatingghost
76ded10a70 Merge pull request 'Backoff on HTTP requests when 429 is recieved' (#762) from backoff-http into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/762
2024-05-11 04:38:47 +00:00
Floatingghost
4457928e32 duct-tape fix for #438
we really need to make this less manual
2024-05-11 05:30:18 +01:00
floatingghost
ee03149ba1 Merge pull request 'Fix Exiftool migration id' (#763) from Oneric/akkoma:fix-migration-timeline-exifdesc into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/763
2024-05-06 22:51:05 +00:00
Floatingghost
ea6bc8a7c5 add a test for 503-rate-limiting 2024-05-06 23:36:00 +01:00
Floatingghost
bd74693db6 additionally support retry-after values 2024-05-06 23:34:48 +01:00
Oneric
5256678901 Fix Exiftool migration id
Applying works fine with a 20220220135625 version, but it won’t be
rolled back in the right order. Fortunately this action is idempotent
so we can just rename and reapply it with a new id.

To also not break large-scale rollbacks past 2022 for anyone
who already applied it with the old id, keep a stub migration.
2024-05-07 00:16:21 +02:00
Oneric
b7e3d44756 Drop unused indices
This promotes and expands our existing optional migration.
Based on usage statistics from several instances, see:
https://akkoma.dev/AkkomaGang/akkoma/issues/764

activities_hosts is now retained after all since it’s essential
for the "instance" query parameter of *oma’s public timeline to
reliably work in a reasonable amount of time. (Although akkoma-fe has
no support for this feature and apparently barely anyone uses it.)

activities_actor_index was already dropped before in
20221211234352_remove_unused_indices; no need to drop it again.

Birthday indices were introduced in pleroma starting with
20220116183110_add_birthday_to_users which is past the
last common migration 20210416051708.
2024-05-02 00:08:33 +02:00
Floatingghost
010e8c7bb2 where were you when lint fail 2024-04-26 19:28:01 +01:00
Floatingghost
9671cdecdf changelog entry 2024-04-26 19:10:17 +01:00
Floatingghost
f531484063 Merge branch 'develop' into backoff-http 2024-04-26 19:06:18 +01:00
Floatingghost
ec7e9da734 Correct ttl syntax for new cachex 2024-04-26 19:05:12 +01:00
FloatingGhost
3c384c1b76 Add ratelimit backoff to HTTP get 2024-04-26 19:01:12 +01:00
FloatingGhost
2437a3e9ba add test for backoff 2024-04-26 19:01:01 +01:00
FloatingGhost
ad7dcf38a8 Add HTTP backoff cache to respect 429s 2024-04-26 19:00:35 +01:00
ilja
3947012691 Fix warnings
There were two warnings, these are now fixed.

I moved the fonts folder into the css folder. Antother option was to change the relative path,
but it seems that after changing it in the css file, the path got changed back when rebuilding the site.
Maybe it needs to be changed somewhere else, idk, this worked.
2023-05-29 09:10:07 +02:00
ilja
d61b7d4b49 Improve backup restore
CREATE DATABASE was running in a transaction block with CREATE USER. This isn't allowed (any more?).
This is now two separate commands.

I also did some other touch-ups including
* making it OTP-first,
* add backup of static directory because this contains e.g. custom emoji, and
* remove the suggestion for using the setup_db.psql file. The reason is because I fear it causes more confusion than what it's worth.
    * Firstly, OTP installations won't have this file because it's created in /tmp.
    * Secondly, the instance has been reinstalled and thus a new setup_db.psql with different password may have been created, causing only more confusion.
2023-05-29 09:09:56 +02:00
187 changed files with 34872 additions and 31970 deletions

View file

@ -1,4 +1,5 @@
platform: linux/amd64
labels:
platform: linux/amd64
depends_on:
- test
@ -34,7 +35,7 @@ variables:
- &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
- &mix-clean "mix deps.clean --all && mix clean"
pipeline:
steps:
# Canonical amd64
debian-bookworm:
image: hexpm/elixir:1.15.4-erlang-26.0.2-debian-bookworm-20230612

View file

@ -1,4 +1,5 @@
platform: linux/arm64
labels:
platform: linux/aarch64
depends_on:
- test
@ -34,7 +35,7 @@ variables:
- &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
- &mix-clean "mix deps.clean --all && mix clean"
pipeline:
steps:
# Canonical arm64
debian-bookworm:
image: hexpm/elixir:1.15.4-erlang-26.0.2-debian-bookworm-20230612

View file

@ -1,4 +1,5 @@
platform: linux/amd64
labels:
platform: linux/amd64
depends_on:
- test
@ -45,7 +46,7 @@ variables:
- &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
- &mix-clean "mix deps.clean --all && mix clean"
pipeline:
steps:
docs:
<<: *on-point-release
secrets:

View file

@ -1,4 +1,5 @@
platform: linux/amd64
labels:
platform: linux/amd64
variables:
- &scw-secrets
@ -41,9 +42,9 @@ variables:
- &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
- &mix-clean "mix deps.clean --all && mix clean"
pipeline:
steps:
lint:
image: akkoma/ci-base:1.15-otp26
image: akkoma/ci-base:1.16-otp26
<<: *on-pr-open
environment:
MIX_ENV: test

View file

@ -1,4 +1,5 @@
platform: linux/amd64
labels:
platform: linux/amd64
depends_on:
- lint
@ -12,12 +13,6 @@ matrix:
- 25
- 26
include:
- ELIXIR_VERSION: 1.14
OTP_VERSION: 25
- ELIXIR_VERSION: 1.15
OTP_VERSION: 25
- ELIXIR_VERSION: 1.15
OTP_VERSION: 26
- ELIXIR_VERSION: 1.16
OTP_VERSION: 26
@ -73,7 +68,7 @@ services:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
pipeline:
steps:
test:
image: akkoma/ci-base:${ELIXIR_VERSION}-otp${OTP_VERSION}
<<: *on-pr-open

View file

@ -4,7 +4,29 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Unreleased
## UNRELEASED
## BREAKING
- Minimum PostgreSQL version is raised to 12
## Added
- Implement [FEP-67ff](https://codeberg.org/fediverse/fep/src/branch/main/fep/67ff/fep-67ff.md) (federation documentation)
- Meilisearch: it is now possible to use separate keys for search and admin actions
- New standalone `prune_orphaned_activities` mix task with configurable batch limit
- The `prune_objects` mix task now accepts a `--limit` parameter for initial object pruning
## Fixed
- Meilisearch: order of results returned from our REST API now actually matches how Meilisearch ranks results
## Changed
- Refactored Rich Media to cache the content in the database. Fetching operations that could block status rendering have been eliminated.
## 2024.04.1 (Security)
## Fixed
- Issue allowing non-owners to use media objects in posts
- Issue allowing use of non-media objects as attachments and crashing timeline rendering
- Issue allowing webfinger spoofing in certain situations
## 2024.04
@ -37,6 +59,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
- Issue leading to Mastodon bot accounts being rejected
- Scope misdetection of remote posts resulting from not recognising
JSON-LD-compacted forms of public scope; affected e.g. federation with bovine
- Ratelimits encountered when fetching objects are now respected; 429 responses will cause a backoff when we get one.
## Removed
- ActivityPub Client-To-Server write API endpoints have been disabled;

42
FEDERATION.md Normal file
View file

@ -0,0 +1,42 @@
# Federation
## Supported federation protocols and standards
- [ActivityPub](https://www.w3.org/TR/activitypub/) (Server-to-Server)
- [WebFinger](https://webfinger.net/)
- [Http Signatures](https://datatracker.ietf.org/doc/html/draft-cavage-http-signatures)
- [NodeInfo](https://nodeinfo.diaspora.software/)
## Supported FEPs
- [FEP-67ff: FEDERATION](https://codeberg.org/fediverse/fep/src/branch/main/fep/67ff/fep-67ff.md)
- [FEP-f1d5: NodeInfo in Fediverse Software](https://codeberg.org/fediverse/fep/src/branch/main/fep/f1d5/fep-f1d5.md)
- [FEP-fffd: Proxy Objects](https://codeberg.org/fediverse/fep/src/branch/main/fep/fffd/fep-fffd.md)
## ActivityPub
Akkoma mostly follows the server-to-server parts of the ActivityPub standard,
but implements quirks for Mastodon compatibility as well as Mastodon-specific
and custom extensions.
See our documentation and Mastodons federation information
linked further below for details on these quirks and extensions.
Akkoma does not perform JSON-LD processing.
### Required extensions
#### HTTP Signatures
All AP S2S POST requests to Akkoma instances MUST be signed.
Depending on instance configuration the same may be true for GET requests.
## Nodeinfo
Akkoma provides many additional entries in its nodeinfo response,
see the documentation linked below for details.
## Additional documentation
- [Akkomas ActivityPub extensions](https://docs.akkoma.dev/develop/development/ap_extensions/)
- [Akkomas nodeinfo extensions](https://docs.akkoma.dev/develop/development/nodeinfo_extensions/)
- [Mastodons federation requirements](https://github.com/mastodon/mastodon/blob/main/FEDERATION.md)

View file

@ -1,5 +0,0 @@
@import "tailwindcss/base";
@import "tailwindcss/components";
@import "tailwindcss/utilities";
/* This file is for your main application CSS */

View file

@ -1,44 +0,0 @@
// If you want to use Phoenix channels, run `mix help phx.gen.channel`
// to get started and then uncomment the line below.
// import "./user_socket.js"
// You can include dependencies in two ways.
//
// The simplest option is to put them in assets/vendor and
// import them using relative paths:
//
// import "../vendor/some-package.js"
//
// Alternatively, you can `npm install some-package --prefix assets` and import
// them using a path starting with the package name:
//
// import "some-package"
//
// Include phoenix_html to handle method=PUT/DELETE in forms and buttons.
import "phoenix_html"
// Establish Phoenix Socket and LiveView configuration.
import {Socket} from "phoenix"
import {LiveSocket} from "phoenix_live_view"
import topbar from "../vendor/topbar"
let csrfToken = document.querySelector("meta[name='csrf-token']").getAttribute("content")
let liveSocket = new LiveSocket("/live", Socket, {
longPollFallbackMs: 2500,
params: {_csrf_token: csrfToken}
})
// Show progress bar on live navigation and form submits
topbar.config({barColors: {0: "#29d"}, shadowColor: "rgba(0, 0, 0, .3)"})
window.addEventListener("phx:page-loading-start", _info => topbar.show(300))
window.addEventListener("phx:page-loading-stop", _info => topbar.hide())
// connect if there are any LiveViews on the page
liveSocket.connect()
// expose liveSocket on window for web console debug logs and latency simulation:
// >> liveSocket.enableDebug()
// >> liveSocket.enableLatencySim(1000) // enabled for duration of browser session
// >> liveSocket.disableLatencySim()
window.liveSocket = liveSocket

View file

@ -1,74 +0,0 @@
// See the Tailwind configuration guide for advanced usage
// https://tailwindcss.com/docs/configuration
const plugin = require("tailwindcss/plugin")
const fs = require("fs")
const path = require("path")
module.exports = {
content: [
"./js/**/*.js",
"../lib/pleroma/**/*.*ex"
],
theme: {
extend: {
colors: {
brand: "#FD4F00",
}
},
},
plugins: [
require("@tailwindcss/forms"),
// Allows prefixing tailwind classes with LiveView classes to add rules
// only when LiveView classes are applied, for example:
//
// <div class="phx-click-loading:animate-ping">
//
plugin(({addVariant}) => addVariant("phx-no-feedback", [".phx-no-feedback&", ".phx-no-feedback &"])),
plugin(({addVariant}) => addVariant("phx-click-loading", [".phx-click-loading&", ".phx-click-loading &"])),
plugin(({addVariant}) => addVariant("phx-submit-loading", [".phx-submit-loading&", ".phx-submit-loading &"])),
plugin(({addVariant}) => addVariant("phx-change-loading", [".phx-change-loading&", ".phx-change-loading &"])),
// Embeds Heroicons (https://heroicons.com) into your app.css bundle
// See your `CoreComponents.icon/1` for more information.
//
plugin(function({matchComponents, theme}) {
let iconsDir = path.join(__dirname, "../deps/heroicons/optimized")
let values = {}
let icons = [
["", "/24/outline"],
["-solid", "/24/solid"],
["-mini", "/20/solid"],
["-micro", "/16/solid"]
]
icons.forEach(([suffix, dir]) => {
fs.readdirSync(path.join(iconsDir, dir)).forEach(file => {
let name = path.basename(file, ".svg") + suffix
values[name] = {name, fullPath: path.join(iconsDir, dir, file)}
})
})
matchComponents({
"hero": ({name, fullPath}) => {
let content = fs.readFileSync(fullPath).toString().replace(/\r?\n|\r/g, "")
let size = theme("spacing.6")
if (name.endsWith("-mini")) {
size = theme("spacing.5")
} else if (name.endsWith("-micro")) {
size = theme("spacing.4")
}
return {
[`--hero-${name}`]: `url('data:image/svg+xml;utf8,${content}')`,
"-webkit-mask": `var(--hero-${name})`,
"mask": `var(--hero-${name})`,
"mask-repeat": "no-repeat",
"background-color": "currentColor",
"vertical-align": "middle",
"display": "inline-block",
"width": size,
"height": size
}
}
}, {values})
})
]
}

View file

@ -1,165 +0,0 @@
/**
* @license MIT
* topbar 2.0.0, 2023-02-04
* https://buunguyen.github.io/topbar
* Copyright (c) 2021 Buu Nguyen
*/
(function (window, document) {
"use strict";
// https://gist.github.com/paulirish/1579671
(function () {
var lastTime = 0;
var vendors = ["ms", "moz", "webkit", "o"];
for (var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) {
window.requestAnimationFrame =
window[vendors[x] + "RequestAnimationFrame"];
window.cancelAnimationFrame =
window[vendors[x] + "CancelAnimationFrame"] ||
window[vendors[x] + "CancelRequestAnimationFrame"];
}
if (!window.requestAnimationFrame)
window.requestAnimationFrame = function (callback, element) {
var currTime = new Date().getTime();
var timeToCall = Math.max(0, 16 - (currTime - lastTime));
var id = window.setTimeout(function () {
callback(currTime + timeToCall);
}, timeToCall);
lastTime = currTime + timeToCall;
return id;
};
if (!window.cancelAnimationFrame)
window.cancelAnimationFrame = function (id) {
clearTimeout(id);
};
})();
var canvas,
currentProgress,
showing,
progressTimerId = null,
fadeTimerId = null,
delayTimerId = null,
addEvent = function (elem, type, handler) {
if (elem.addEventListener) elem.addEventListener(type, handler, false);
else if (elem.attachEvent) elem.attachEvent("on" + type, handler);
else elem["on" + type] = handler;
},
options = {
autoRun: true,
barThickness: 3,
barColors: {
0: "rgba(26, 188, 156, .9)",
".25": "rgba(52, 152, 219, .9)",
".50": "rgba(241, 196, 15, .9)",
".75": "rgba(230, 126, 34, .9)",
"1.0": "rgba(211, 84, 0, .9)",
},
shadowBlur: 10,
shadowColor: "rgba(0, 0, 0, .6)",
className: null,
},
repaint = function () {
canvas.width = window.innerWidth;
canvas.height = options.barThickness * 5; // need space for shadow
var ctx = canvas.getContext("2d");
ctx.shadowBlur = options.shadowBlur;
ctx.shadowColor = options.shadowColor;
var lineGradient = ctx.createLinearGradient(0, 0, canvas.width, 0);
for (var stop in options.barColors)
lineGradient.addColorStop(stop, options.barColors[stop]);
ctx.lineWidth = options.barThickness;
ctx.beginPath();
ctx.moveTo(0, options.barThickness / 2);
ctx.lineTo(
Math.ceil(currentProgress * canvas.width),
options.barThickness / 2
);
ctx.strokeStyle = lineGradient;
ctx.stroke();
},
createCanvas = function () {
canvas = document.createElement("canvas");
var style = canvas.style;
style.position = "fixed";
style.top = style.left = style.right = style.margin = style.padding = 0;
style.zIndex = 100001;
style.display = "none";
if (options.className) canvas.classList.add(options.className);
document.body.appendChild(canvas);
addEvent(window, "resize", repaint);
},
topbar = {
config: function (opts) {
for (var key in opts)
if (options.hasOwnProperty(key)) options[key] = opts[key];
},
show: function (delay) {
if (showing) return;
if (delay) {
if (delayTimerId) return;
delayTimerId = setTimeout(() => topbar.show(), delay);
} else {
showing = true;
if (fadeTimerId !== null) window.cancelAnimationFrame(fadeTimerId);
if (!canvas) createCanvas();
canvas.style.opacity = 1;
canvas.style.display = "block";
topbar.progress(0);
if (options.autoRun) {
(function loop() {
progressTimerId = window.requestAnimationFrame(loop);
topbar.progress(
"+" + 0.05 * Math.pow(1 - Math.sqrt(currentProgress), 2)
);
})();
}
}
},
progress: function (to) {
if (typeof to === "undefined") return currentProgress;
if (typeof to === "string") {
to =
(to.indexOf("+") >= 0 || to.indexOf("-") >= 0
? currentProgress
: 0) + parseFloat(to);
}
currentProgress = to > 1 ? 1 : to;
repaint();
return currentProgress;
},
hide: function () {
clearTimeout(delayTimerId);
delayTimerId = null;
if (!showing) return;
showing = false;
if (progressTimerId != null) {
window.cancelAnimationFrame(progressTimerId);
progressTimerId = null;
}
(function loop() {
if (topbar.progress("+.1") >= 1) {
canvas.style.opacity -= 0.05;
if (canvas.style.opacity <= 0.05) {
canvas.style.display = "none";
fadeTimerId = null;
return;
}
}
fadeTimerId = window.requestAnimationFrame(loop);
})();
},
};
if (typeof module === "object" && typeof module.exports === "object") {
module.exports = topbar;
} else if (typeof define === "function" && define.amd) {
define(function () {
return topbar;
});
} else {
this.topbar = topbar;
}
}.call(this, window, document));

View file

@ -63,7 +63,6 @@ config :pleroma, Pleroma.Upload,
uploader: Pleroma.Uploaders.Local,
filters: [],
link_name: false,
proxy_remote: false,
filename_display_max_length: 30,
base_url: nil,
allowed_mime_types: ["image", "audio", "video"]
@ -189,8 +188,10 @@ config :pleroma, :http,
receive_timeout: :timer.seconds(15),
proxy_url: nil,
user_agent: :default,
pool_size: 50,
adapter: []
pool_size: 10,
adapter: [],
# see: https://hexdocs.pm/finch/Finch.html#start_link/1
pool_max_idle_time: :timer.seconds(30)
config :pleroma, :instance,
name: "Akkoma",
@ -437,8 +438,12 @@ config :pleroma, :rich_media,
Pleroma.Web.RichMedia.Parsers.TwitterCard,
Pleroma.Web.RichMedia.Parsers.OEmbed
],
failure_backoff: :timer.minutes(20),
ttl_setters: [Pleroma.Web.RichMedia.Parser.TTL.AwsSignedUrl]
failure_backoff: 60_000,
ttl_setters: [
Pleroma.Web.RichMedia.Parser.TTL.AwsSignedUrl,
Pleroma.Web.RichMedia.Parser.TTL.Opengraph
],
max_body: 5_000_000
config :pleroma, :media_proxy,
enabled: false,
@ -576,7 +581,9 @@ config :pleroma, Oban,
mute_expire: 5,
search_indexing: 10,
nodeinfo_fetcher: 1,
database_prune: 1
database_prune: 1,
rich_media_backfill: 2,
rich_media_expiration: 2
],
plugins: [
Oban.Plugins.Pruner,
@ -592,7 +599,8 @@ config :pleroma, :workers,
retries: [
federator_incoming: 5,
federator_outgoing: 5,
search_indexing: 2
search_indexing: 2,
rich_media_backfill: 3
],
timeout: [
activity_expiration: :timer.seconds(5),
@ -614,7 +622,8 @@ config :pleroma, :workers,
mute_expire: :timer.seconds(5),
search_indexing: :timer.seconds(5),
nodeinfo_fetcher: :timer.seconds(10),
database_prune: :timer.minutes(10)
database_prune: :timer.minutes(10),
rich_media_backfill: :timer.seconds(30)
]
config :pleroma, Pleroma.Formatter,
@ -813,8 +822,10 @@ config :pleroma, :modules, runtime_dir: "instance/modules"
config :pleroma, configurable_from_database: false
config :pleroma, Pleroma.Repo,
parameters: [gin_fuzzy_search_limit: "500"],
prepare: :unnamed
parameters: [
gin_fuzzy_search_limit: "500",
plan_cache_mode: "force_custom_plan"
]
config :pleroma, :majic_pool, size: 2
@ -898,26 +909,6 @@ config :pleroma, :argos_translate,
command_argospm: "argospm",
strip_html: true
config :esbuild,
version: "0.17.11",
pleroma: [
args:
~w(js/app.js --bundle --target=es2017 --outdir=../priv/static/assets --external:/fonts/* --external:/images/*),
cd: Path.expand("../assets", __DIR__),
env: %{"NODE_PATH" => Path.expand("../deps", __DIR__)}
]
config :tailwind,
version: "3.4.0",
pleroma: [
args: ~w(
--config=tailwind.config.js
--input=css/app.css
--output=../priv/static/assets/app.css
),
cd: Path.expand("../assets", __DIR__)
]
# Import environment specific config. This must remain at the bottom
# of this file so it overrides the configuration defined above.
import_config "#{Mix.env()}.exs"

View file

@ -118,14 +118,6 @@ config :pleroma, :config_description, [
"font"
]
},
%{
key: :proxy_remote,
type: :boolean,
description: """
Proxy requests to the remote uploader.\n
Useful if media upload endpoint is not internet accessible.
"""
},
%{
key: :filename_display_max_length,
type: :integer,
@ -2525,7 +2517,6 @@ config :pleroma, :config_description, [
group: :pleroma,
key: :emoji,
type: :group,
description: "Configuration options related to emoji",
children: [
%{
key: :shortcode_globs,
@ -2562,7 +2553,7 @@ config :pleroma, :config_description, [
key: :shared_pack_cache_seconds_per_file,
label: "Shared pack cache s/file",
type: :integer,
description:
descpiption:
"When an emoji pack is shared, the archive is created and cached in memory" <>
" for this amount of seconds multiplied by the number of files.",
suggestions: [60]
@ -2718,8 +2709,8 @@ config :pleroma, :config_description, [
%{
key: :pool_size,
type: :integer,
description: "Number of concurrent outbound HTTP requests to allow. Default 50.",
suggestions: [50]
description: "Number of concurrent outbound HTTP requests to allow PER HOST. Default 10.",
suggestions: [10]
},
%{
key: :adapter,
@ -2742,6 +2733,13 @@ config :pleroma, :config_description, [
]
}
]
},
%{
key: :pool_max_idle_time,
type: :integer,
description:
"Number of seconds to retain an HTTP pool; pool will remain if actively in use. Default 30 seconds (in ms).",
suggestions: [30_000]
}
]
},

View file

@ -63,7 +63,8 @@ config :tesla, adapter: Tesla.Mock
config :pleroma, :rich_media,
enabled: false,
ignore_hosts: [],
ignore_tld: ["local", "localdomain", "lan"]
ignore_tld: ["local", "localdomain", "lan"],
max_body: 2_000_000
config :pleroma, :instance,
multi_factor_authentication: [
@ -141,6 +142,8 @@ config :phoenix, :plug_init_mode, :runtime
config :pleroma, :instances_favicons, enabled: false
config :pleroma, :instances_nodeinfo, enabled: false
config :pleroma, Pleroma.Web.RichMedia.Backfill, provider: Pleroma.Web.RichMedia.Backfill
if File.exists?("./config/test.secret.exs") do
import_config "test.secret.exs"
else

View file

@ -0,0 +1,10 @@
if [ "$#" -ne 2 ]; then
echo "Usage: binary-leak-checker.sh <nodename> <erlang cookie>"
exit 1
fi
echo "The command you want to run is:
:recon.bin_leak(10)
"
iex --sname debug --remsh $1 --erl "-setcookie $2"

View file

@ -50,9 +50,39 @@ This will prune remote posts older than 90 days (configurable with [`config :ple
- `--keep-threads` - Don't prune posts when they are part of a thread where at least one post has seen local interaction (e.g. one of the posts is a local post, or is favourited by a local user, or has been repeated by a local user...). It also wont delete posts when at least one of the posts in that thread is kept (e.g. because one of the posts has seen recent activity).
- `--keep-non-public` - Keep non-public posts like DM's and followers-only, even if they are remote.
- `--limit` - limits how many remote posts get pruned. This limit does **not** apply to any of the follow up jobs. If wanting to keep the database load in check it is thus advisable to run the standalone `prune_orphaned_activities` task with a limit afterwards instead of passing `--prune-orphaned-activities` to this task.
- `--prune-orphaned-activities` - Also prune orphaned activities afterwards. Activities are things like Like, Create, Announce, Flag (aka reports)... They can significantly help reduce the database size.
- `--vacuum` - Run `VACUUM FULL` after the objects are pruned. This should not be used on a regular basis, but is useful if your instance has been running for a long time before pruning.
## Prune orphaned activities from the database
This will prune activities which are no longer referenced by anything.
Such activities might be the result of running `prune_objects` without `--prune-orphaned-activities`.
The same notes and warnings apply as for `prune_objects`.
The task will print out how many rows were freed in total in its last
line of output in the form `Deleted 345 rows`.
When running the job in limited batches this can be used to determine
when all orphaned activities have been deleted.
=== "OTP"
```sh
./bin/pleroma_ctl database prune_orphaned_activities [option ...]
```
=== "From Source"
```sh
mix pleroma.database prune_orphaned_activities [option ...]
```
### Options
- `--limit n` - Only delete up to `n` activities in each query making up this job, i.e. if this job runs two queries, at most `2n` activities will be deleted. Running this task repeatedly in limited batches can help maintain the instance's responsiveness while still freeing up some space; see the sketch after this list.
- `--no-singles` - Do not delete activities referencing single objects
- `--no-arrays` - Do not delete activities referencing an array of objects
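As a sketch of such a batched run on an OTP install (assuming the task's final `Deleted N rows` line reaches stdout unadorned; the loop stops once a batch frees nothing):
```sh
while true; do
  out="$(./bin/pleroma_ctl database prune_orphaned_activities --limit 100000 | tail -n 1)"
  echo "$out"
  [ "$out" = "Deleted 0 rows" ] && break
done
```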
## Create a conversation for all existing DMs
Can be safely re-run


@ -4,12 +4,12 @@
1. Stop the Akkoma service.
2. Go to the working directory of Akkoma (default is `/opt/akkoma`)
3. Run[¹] `sudo -Hu postgres pg_dump -d akkoma --format=custom -f </path/to/backup_location/akkoma.pgdump>` (make sure the postgres user has write access to the destination file)
4. Copy `akkoma.pgdump`, `config/prod.secret.exs`[²], `config/setup_db.psql` (if still available) and the `uploads` folder to your backup destination. If you have other modifications, copy those changes too.
3. Run `sudo -Hu postgres pg_dump -d akkoma --format=custom -f </path/to/backup_location/akkoma.pgdump>`[¹] (make sure the postgres user has write access to the destination file)
4. Copy `akkoma.pgdump`, `config/config.exs`[²], `uploads` folder, and [static directory](../configuration/static_dir.md) to your backup destination. If you have other modifications, copy those changes too.
5. Restart the Akkoma service.
[¹]: We assume the database name is "akkoma". If not, you can find the correct name in your config files.
[²]: If you've installed using OTP, you need `config/config.exs` instead of `config/prod.secret.exs`.
[¹]: We assume the database name is "akkoma". If not, you can find the correct name in your configuration files.
[²]: If you have a from source installation, you need `config/prod.secret.exs` instead of `config/config.exs`. The `config/config.exs` file also exists, but in case of from source installations, it only contains the default values and it is tracked by Git, so you don't need to back it up.
## Restore/Move
@ -17,19 +17,16 @@
2. Stop the Akkoma service.
3. Go to the working directory of Akkoma (default is `/opt/akkoma`)
4. Copy the above mentioned files back to their original position.
5. Drop the existing database and user if restoring in-place[¹]. `sudo -Hu postgres psql -c 'DROP DATABASE akkoma;';` `sudo -Hu postgres psql -c 'DROP USER akkoma;'`
6. Restore the database schema and akkoma role using either of the following options
* You can use the original `setup_db.psql` if you have it[²]: `sudo -Hu postgres psql -f config/setup_db.psql`.
* Or recreate the database and user yourself (replace the password with the one you find in the config file) `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-which-you-can-find-in-your-config-file>'; CREATE DATABASE akkoma OWNER akkoma;"`.
5. Drop the existing database and user[¹]. `sudo -Hu postgres psql -c 'DROP DATABASE akkoma;';` `sudo -Hu postgres psql -c 'DROP USER akkoma;'`
6. Restore the database schema and akkoma role[¹] (replace the password with the one you find in the configuration file), `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-which-you-can-find-in-your-configuration-file>';"` `sudo -Hu postgres psql -c "CREATE DATABASE akkoma OWNER akkoma;"`.
7. Now restore the Akkoma instance's data into the empty database schema[¹]: `sudo -Hu postgres pg_restore -d akkoma -v -1 </path/to/backup_location/akkoma.pgdump>`
8. If you installed a newer Akkoma version, you should run `MIX_ENV=prod mix ecto.migrate`[³]. This task performs database migrations, if there were any.
8. If you installed a newer Akkoma version, you should run the database migrations `./bin/pleroma_ctl migrate`[²].
9. Restart the Akkoma service.
10. Run `sudo -Hu postgres vacuumdb --all --analyze-in-stages`. This will quickly generate the statistics so that postgres can properly plan queries.
11. If setting up on a new server configure Nginx by using the `installation/akkoma.nginx` config sample or reference the Akkoma installation guide for your OS which contains the Nginx configuration instructions.
11. If setting up on a new server, configure Nginx by using the `installation/nginx/akkoma.nginx` configuration sample or reference the Akkoma installation guide which contains the Nginx configuration instructions.
[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your config files.
[²]: You can recreate the `config/setup_db.psql` by running the `mix pleroma.instance gen` task again. You can ignore most of the questions, but make the database user, name, and password the same as found in your backed up config file. This will also create a new `config/generated_config.exs` file which you may delete as it is not needed.
[³]: Prefix with `MIX_ENV=prod` to run it using the production config file.
[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your configuration files.
[²]: If you have a from source installation, the command is `MIX_ENV=prod mix ecto.migrate`. Note that we prefix with `MIX_ENV=prod` to use the `config/prod.secret.exs` configuration file.
## Remove


@ -63,6 +63,8 @@ To add configuration to your config file, you can copy it from the base config.
* `local_bubble`: Array of domains representing instances closely related to yours. Used to populate the `bubble` timeline. e.g `["example.com"]`, (default: `[]`)
* `languages`: List of Language Codes used by the instance. This is used to try to set a default language from the frontend. It will try to find the first match between the languages set here and the user's browser languages, and will default to the first language in this setting if there is no match. (default `["en"]`)
* `export_prometheus_metrics`: Enable prometheus metrics, served at `/api/v1/akkoma/metrics`, requiring the `admin:metrics` oauth scope.
* `privileged_staff`: Set to `true` to give moderators access to a few higher responsibility actions.
* `federated_timeline_available`: Set to `false` to remove access to the federated timeline for all users.
## :database
* `improved_hashtag_timeline`: Setting to force toggle / force disable improved hashtags timeline. `:enabled` forces hashtags to be fetched from `hashtags` table for hashtags timeline. `:disabled` forces object-embedded hashtags to be used (slower). Keep it `:auto` for automatic behaviour (it is auto-set to `:enabled` [unless overridden] when HashtagsTableMigrator completes).
@ -603,7 +605,6 @@ the source code is here: [kocaptcha](https://github.com/koto-bank/kocaptcha). Th
* `link_name`: When enabled Akkoma will add a `name` parameter to the url of the upload, for example `https://instance.tld/media/corndog.png?name=corndog.png`. This is needed to provide the correct filename in Content-Disposition headers
* `base_url`: The base URL to access a user-uploaded file; MUST be configured explicitly.
Using a (sub)domain distinct from the instance endpoint is **strongly** recommended. A good value might be `https://media.myakkoma.instance/media/`.
* `proxy_remote`: If you're using a remote uploader, Akkoma will proxy media requests instead of redirecting to it.
* `proxy_opts`: Proxy options, see `Pleroma.ReverseProxy` documentation.
* `filename_display_max_length`: Set max length of a filename to display. 0 = no limit. Default: 30.


@ -6,37 +6,17 @@ With the `mediaproxy` function you can use nginx to cache this content, so users
## Activate it
* Edit your nginx config and add the following location to your main server block:
```
location /proxy {
return 404;
}
```
* Set up a subdomain for the proxy with its nginx config on the same machine
*(the latter is not strictly required, but for simplicity we'll assume so)*
* In this subdomain's server block add
```
location /proxy {
proxy_cache akkoma_media_cache;
proxy_cache_lock on;
proxy_pass http://localhost:4000;
}
```
Also add the following on top of the configuration, outside of the `server` block:
```
proxy_cache_path /tmp/akkoma-media-cache levels=1:2 keys_zone=akkoma_media_cache:10m max_size=10g inactive=720m use_temp_path=off;
```
If you came here from one of the installation guides, take a look at the example configuration `/installation/nginx/akkoma.nginx`, where this part is already included.
* Edit the nginx config for the upload/MediaProxy subdomain to point to the subdomain that has been set up
* Append the following to your `prod.secret.exs` or `dev.secret.exs` (depending on which mode your instance is running):
```
```elixir
# Replace media.example.tld with the subdomain you set up earlier
config :pleroma, :media_proxy,
enabled: true,
proxy_opts: [
redirect_on_failure: true
],
base_url: "https://cache.akkoma.social"
base_url: "https://media.example.tld"
```
You **really** should use a subdomain to serve proxied files; while we will fix bugs resulting from this, serving arbitrary remote content on your main domain namespace is a significant attack surface.


@ -130,59 +130,26 @@ config :pleroma, :http_security,
enabled: false
```
Use this as the Nginx config:
```
proxy_cache_path /tmp/akkoma-media-cache levels=1:2 keys_zone=akkoma_media_cache:10m max_size=10g inactive=720m use_temp_path=off;
# The above already exists in a clearnet instance's config.
# If not, add it.
server {
listen 127.0.0.1:14447;
server_name youri2paddress;
# Comment to enable logs
access_log /dev/null;
error_log /dev/null;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript application/activity+json application/atom+xml;
client_max_body_size 16m;
location / {
In the Nginx config, add the following into the `location /` block:
```nginx
add_header X-XSS-Protection "0";
add_header X-Permitted-Cross-Domain-Policies none;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header Referrer-Policy same-origin;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_pass http://localhost:4000;
client_max_body_size 16m;
}
location /proxy {
proxy_cache akkoma_media_cache;
proxy_cache_lock on;
proxy_ignore_client_abort on;
proxy_pass http://localhost:4000;
}
}
```
reload Nginx:
Change the `listen` directive to the following:
```nginx
listen 127.0.0.1:14447;
```
systemctl stop i2pd.service --no-block
systemctl start i2pd.service
Set `server_name` to your i2p address.
Reload Nginx:
```
systemctl restart i2pd.service --no-block
systemctl reload nginx.service
```
*Notice:* The stop command initiates a graceful shutdown process; i2pd stops after it finishes routing transit tunnels (up to 10 minutes).


@ -74,56 +74,23 @@ config :pleroma, :http_security,
enabled: false
```
Use this as the Nginx config:
```
proxy_cache_path /tmp/akkoma-media-cache levels=1:2 keys_zone=akkoma_media_cache:10m max_size=10g inactive=720m use_temp_path=off;
# The above already exists in a clearnet instance's config.
# If not, add it.
server {
listen 127.0.0.1:8099;
server_name youronionaddress;
# Comment to enable logs
access_log /dev/null;
error_log /dev/null;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript application/activity+json application/atom+xml;
client_max_body_size 16m;
location / {
In the Nginx config, add the following into the `location /` block:
```nginx
add_header X-XSS-Protection "0";
add_header X-Permitted-Cross-Domain-Policies none;
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header Referrer-Policy same-origin;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_pass http://localhost:4000;
client_max_body_size 16m;
}
location /proxy {
proxy_cache akkoma_media_cache;
proxy_cache_lock on;
proxy_ignore_client_abort on;
proxy_pass http://localhost:4000;
}
}
```
reload Nginx:
Change the `listen` directive to the following:
```nginx
listen 127.0.0.1:8099;
```
Set the `server_name` to your onion address.
Reload Nginx:
```
systemctl reload nginx
```


@ -4,47 +4,10 @@ Akkoma performance is largely dependent on performance of the underlying databas
## PGTune
[PgTune](https://pgtune.leopard.in.ua) can be used to get recommended settings. Be sure to set "Number of Connections" to 20, otherwise it might produce settings hurtful to database performance. It is also recommended to not use "Network Storage" option.
[PgTune](https://pgtune.leopard.in.ua) can be used to get recommended settings. Make sure to set the DB type to "Online transaction processing system" for optimal performance. Also set the number of connections to between 25 and 30. This allows each connection access to more resources while still leaving some room for maintenance tasks to run alongside the instance.
If your server runs other services, you may want to take that into account. E.g. if you have 4G ram, but 1G of it is already used for other services, it may be better to tell PGTune you only have 3G. In the end, PGTune only provides recommended settings; you can always try to fine-tune further.
It is also recommended to not use "Network Storage" option.
### Example configurations
If your server runs other services, you may want to take that into account. E.g. if you have 4G ram, but 1G of it is already used for other services, it may be better to tell PGTune you only have 3G.
Here are some configuration suggestions for PostgreSQL 10+.
#### 1GB RAM, 1 CPU
```
shared_buffers = 256MB
effective_cache_size = 768MB
maintenance_work_mem = 64MB
work_mem = 13107kB
```
#### 2GB RAM, 2 CPU
```
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
work_mem = 26214kB
max_worker_processes = 2
max_parallel_workers_per_gather = 1
max_parallel_workers = 2
```
## Disable generic query plans
When PostgreSQL receives a query, it decides on a strategy for searching the requested data; this is called a query plan. The query planner has two modes: generic and custom. Generic makes a plan for all queries of the same shape, ignoring the parameters, which is then cached and reused. Custom, by contrast, generates a unique query plan based on query parameters.
By default PostgreSQL has an algorithm to decide which mode is more efficient for a particular query, however this algorithm has been observed to be wrong on some of the queries Akkoma sends, leading to serious performance loss. Therefore, it is recommended to disable generic mode.
Akkoma already avoids generic query plans by default, however the method it uses is not the most efficient because it needs to be compatible with all supported PostgreSQL versions. For PostgreSQL 12 and higher, additional performance can be gained by adding the following to Akkoma's configuration:
```elixir
config :pleroma, Pleroma.Repo,
prepare: :named,
parameters: [
plan_cache_mode: "force_custom_plan"
]
```
A more detailed explanation of the issue can be found at <https://blog.soykaf.com/post/postgresql-elixir-troubles/>.
In the end, PGTune only provides recommended settings; you can always try to fine-tune further.


@ -33,6 +33,7 @@ indexes faster when it can process many posts in a single batch.
> config :pleroma, Pleroma.Search.Meilisearch,
> url: "http://127.0.0.1:7700/",
> private_key: "private key",
> search_key: "search key",
> initial_indexing_chunk_size: 100_000
Information about setting up meilisearch can be found in the
@ -45,7 +46,7 @@ is hardly usable on a somewhat big instance.
### Private key authentication (optional)
To set the private key, use the `MEILI_MASTER_KEY` environment variable when starting. After setting the _master key_,
you have to get the _private key_, which is actually used for authentication.
you have to get the _private key_ and possibly _search key_, which are actually used for authentication.
=== "OTP"
```sh
@ -57,7 +58,11 @@ you have to get the _private key_, which is actually used for authentication.
mix pleroma.search.meilisearch show-keys <your master key here>
```
You will see a "Default Admin API Key", this is the key you actually put into your configuration file.
You will see a "Default Admin API Key", this is the key you actually put into
your configuration file as `private_key`. You should also see a
"Default Search API key", put this into your config as `search_key`.
If your version of Meilisearch only showed the former,
just leave `search_key` completely unset in Akkoma's config.
### Initial indexing


@ -6,7 +6,7 @@ as soon as the post is received by your instance.
## Nginx
The following are excerpts from the [suggested nginx config](../../../installation/nginx/akkoma.nginx) that demonstrates the necessary config for the media proxy to work.
The following are excerpts from the [suggested nginx config](https://akkoma.dev/AkkomaGang/akkoma/src/branch/develop/installation/nginx/akkoma.nginx) that demonstrate the necessary config for the media proxy to work.
A `proxy_cache_path` must be defined, for example:


@ -1033,7 +1033,6 @@ Most of the settings will be applied in `runtime`, this means that you don't nee
- `:pools`
- partially settings inside these keys:
- `:seconds_valid` in `Pleroma.Captcha`
- `:proxy_remote` in `Pleroma.Upload`
- `:upload_limit` in `:instance`
- Params:
@ -1094,7 +1093,6 @@ List of settings which support only full update by subkey:
{"tuple": [":uploader", "Pleroma.Uploaders.Local"]},
{"tuple": [":filters", ["Pleroma.Upload.Filter.Dedupe"]]},
{"tuple": [":link_name", true]},
{"tuple": [":proxy_remote", false]},
{"tuple": [":proxy_opts", [
{"tuple": [":redirect_on_failure", false]},
{"tuple": [":max_body_length", 1048576]},


@ -4,7 +4,6 @@
The following endpoints are additionally present in our actors.
- `oauthRegistrationEndpoint` (`http://litepub.social/ns#oauthRegistrationEndpoint`)
- `uploadMedia` (`https://www.w3.org/ns/activitystreams#uploadMedia`)
### oauthRegistrationEndpoint
@ -12,6 +11,279 @@ Points to MastodonAPI `/api/v1/apps` for now.
See <https://docs.joinmastodon.org/methods/apps/>
## Emoji reactions
Emoji reactions are implemented as a new activity type `EmojiReact`.
A single user is allowed to react multiple times with different emoji to the
same post. However, they may only react at most once with the same emoji.
Repeated reactions from the same user with the same emoji are to be ignored.
Emoji reactions are also distinct from `Like` activities and a user may both
`Like` and react to a post.
!!! note
Misskey also supports emoji reactions, but the implementation differs.
It equates likes and reactions and only allows a single reaction per post.
The emoji is placed in the `content` field of the activity
and the `object` property points to the note being reacted to.
Emoji can either be any Unicode emoji sequence or a custom emoji.
For the latter, the shortcode, including the enclosing colons, is placed
into `content` and the emoji object inside the `tag` property.
The `tag` property MAY be omitted for Unicode emoji.
An example reaction with a Unicode emoji:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "EmojiReact",
"id": "https://example.org/activities/23143872a0346141",
"actor": "https://example.org/users/akko",
"nickname": "akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"content": "🧡",
"object": "https://remote.example/objects/9f0e93499d8314a9"
}
```
An example reaction with a custom emoji:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "EmojiReact",
"id": "https://example.org/activities/d75586dec0541650",
"actor": "https://example.org/users/akko",
"nickname": "akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"content": ":mouse:",
"object": "https://remote.example/objects/9f0e93499d8314a9",
"tag": [{
"type": "Emoji",
"id": null,
"name": "mouse",
"icon": {
"type": "Image",
"url": "https://example.org/emoji/mouse/mouse.png"
}
}]
}
```
!!! note
Although an emoji reaction can only contain a single emoji,
for compatibility with older versions of Pleroma and Akkoma,
it is recommended to wrap the emoji object in a single-element array.
When reacting with a remote custom emoji do not include the remote domain in `content`'s shortcode
*(unlike in our REST API which needs the domain)*:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "EmojiReact",
"id": "https://example.org/activities/7993dcae98d8d5ec",
"actor": "https://example.org/users/akko",
"nickname": "akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"content": ":hug:",
"object": "https://remote.example/objects/9f0e93499d8314a9",
"tag": [{
"type": "Emoji",
"id": "https://other.example/emojis/hug",
"name": "hug",
"icon": {
"type": "Image",
"url": "https://other.example/files/b71cea432b3fad67.webp"
}
}]
}
```
Emoji reactions can be retracted using a standard `Undo` activity:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"http://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "Undo",
"id": "http://example.org/activities/4685792e-efb6-4309-b508-ae4f355dd695",
"actor": "https://example.org/users/akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"object": "https://example.org/activities/23143872a0346141"
}
```
## User profile backgrounds
Akkoma federates user profile backgrounds the same way as Sharkey.
An actor's ActivityPub representation contains an additional
`backgroundUrl` property containing an `Image` object. This property
belongs to the `"sharkey": "https://joinsharkey.org/ns#"` namespace.
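An illustrative actor excerpt (all URLs hypothetical):
```json
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    {"sharkey": "https://joinsharkey.org/ns#"}
  ],
  "type": "Person",
  "id": "https://example.org/users/akko",
  "backgroundUrl": {
    "type": "Image",
    "url": "https://example.org/media/background.png"
  }
}
```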
## Quote Posts
Akkoma allows referencing a single other note as a quote,
which will be prominently displayed in the interface.
The quoted post is referenced by its ActivityPub id in the `quoteUri` property.
!!! note
Old Misskey only understood and modern Misskey still prefers
the `_misskey_quote` property for this. Similarly, some other older
software used `quoteUrl` or `quoteURL`.
All current implementations with quote support understand `quoteUri`.
Example:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "Note",
"id": "https://example.org/activities/85717e587f95d5c0",
"actor": "https://example.org/users/akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"context": "https://example.org/contexts/1",
"content": "Look at that!",
"quoteUri": "http://remote.example/status/85717e587f95d5c0",
"contentMap": {
"en": "Look at that!"
},
"source": {
"content": "Look at that!",
"mediaType": "text/plain"
},
"published": "2024-04-06T23:40:28Z",
"updated": "2024-04-06T23:40:28Z",
"attachemnt": [],
"tag": []
}
```
## Threads
Akkoma assigns all posts of the same thread the same `context`. This is a
standard ActivityPub property but its meaning is left vague. Akkoma will
always treat posts with identical `context` as part of the same thread.
`context` must not be assumed to hold any meaning or be dereferenceable.
Incoming posts without `context` will be assigned a new context.
!!! note
Mastodon uses the non-standard `conversation` property for the same purpose
*(named after an older OStatus property)*. For incoming posts without
`context` but with `conversation` Akkoma will use the value from
`conversation` to fill in `context`.
For outgoing posts Akkoma will duplicate the context into `conversation`.
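As an illustration, a reply as Akkoma would emit it, with `context` duplicated into `conversation` for Mastodon compatibility (excerpt, ids hypothetical):
```json
{
  "type": "Note",
  "id": "https://example.org/objects/reply-1",
  "inReplyTo": "https://remote.example/objects/op-1",
  "context": "https://example.org/contexts/42",
  "conversation": "https://example.org/contexts/42",
  "content": "A reply in the same thread"
}
```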
## Post Source
Unlike Mastodon, Akkoma supports drafting posts in multiple source formats
besides plaintext, like Markdown or MFM. The original input is preserved
in the standard ActivityPub `source` property *(not supported by Mastodon)*.
Still, `content` will always be present and contain the prerendered HTML form.
Supported `mediaType` values include:
- `text/plain`
- `text/markdown`
- `text/bbcode`
- `text/x.misskeymarkdown`
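A sketch of how this looks on the wire for a Markdown post (excerpt, hypothetical content):
```json
{
  "type": "Note",
  "content": "<p>Look at <strong>that</strong>!</p>",
  "source": {
    "content": "Look at **that**!",
    "mediaType": "text/markdown"
  }
}
```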
## Post Language
!!! note
This is also supported in and compatible with Mastodon, but since
joinmastodon.org doesn't document it yet, it is included here.
[GoToSocial](https://docs.gotosocial.org/en/latest/federation/federating_with_gotosocial/#content-contentmap-and-language)
has a more refined version of this which can correctly deal with multiple language entries.
A post can indicate its language by including a `contentMap` object
which contains a sub-key named after the language's ISO 639-1 code
and its content identical to the post's `content` field.
Currently Akkoma, just like Mastodon, only properly supports a single language entry;
in case of multiple entries a random language will be picked.
Furthermore, Akkoma currently only reads the `content` field
and never the value from `contentMap`.
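For example, a post declaring English might carry the following (excerpt; per the above, the `contentMap` value mirrors `content`):
```json
{
  "type": "Note",
  "content": "Look at that!",
  "contentMap": {
    "en": "Look at that!"
  }
}
```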
## Local post scope
Posts using this scope will never federate to other servers
but for the sake of completeness it is listed here.
In addition to the usual scopes *(public, unlisted, followers-only, direct)*
Akkoma supports a “local” post scope. Such posts will not federate to
other instances and only be shown to logged-in users on the same instance.
It is included into the local timeline.
This may be useful to discuss or announce instance-specific policies and topics.
A post is addressed to the local scope by including `<base url of instance>/#Public`
in its `to` field. E.g. if the instance is on `https://example.org` it would use
`https://example.org/#Public`.
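An addressing sketch for a local-scope post on `https://example.org` (excerpt, ids hypothetical):
```json
{
  "type": "Note",
  "id": "https://example.org/objects/0a1b2c3d",
  "to": ["https://example.org/#Public"],
  "cc": ["https://example.org/users/akko/followers"]
}
```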
An implementation creating a new post MUST NOT address both the local and
general public scope `as:Public` at the same time. A post addressing the local
scope MUST NOT be sent to other instances or be possible to fetch by other
instances regardless of potential other listed addressees.
When receiving a remote post addressing both the public scope and what appears
to be a local-scope identifier, the post SHOULD be treated without assigning any
special meaning to the potential local-scope identifier.
!!! note
Misskey-derivatives have a similar concept of non-federated posts;
however, those are also shown publicly on the local web interface
and are thus visible to non-members.
## List post scope
Messages originally addressed to a custom list will contain
a `listMessage` field with an unresolvable pseudo ActivityPub id.
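Illustrative excerpt (the `listMessage` value shown is a made-up pseudo id):
```json
{
  "type": "Note",
  "content": "Hello, list!",
  "listMessage": "https://example.org/contexts/ab12cd34-pseudo"
}
```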
# Deprecated and Removed Extensions
The following extensions were used in the past but have been dropped.
Documentation is retained here as a reference and since old objects might
still contain related fields.
## Actor endpoints
The following endpoints used to be present:
- `uploadMedia` (`https://www.w3.org/ns/activitystreams#uploadMedia`)
### uploadMedia
Inspired by <https://www.w3.org/wiki/SocialCG/ActivityPub/MediaUpload>, it is part of the ActivityStreams namespace because it used to be part of the ActivityPub specification and got removed from it.
@ -20,9 +292,8 @@ Content-Type: multipart/form-data
Parameters:
- (required) `file`: The file being uploaded
- (optionnal) `description`: A plain-text description of the media, for accessibility purposes.
- (optional) `description`: A plain-text description of the media, for accessibility purposes.
Response: HTTP 201 Created with the object in the body, no `Location` header provided as it doesn't have an `id`
The object given in the reponse should then be inserted into an Object's `attachment` field.
The object given in the response should then be inserted into an Object's `attachment` field.


@ -0,0 +1,141 @@
# Nodeinfo Extensions
Akkoma currently implements versions 2.0 and 2.1 of the nodeinfo spec,
and additionally provides the following fields.
## metadata
The spec leaves the content of `metadata` up to implementations
and indeed Akkoma adds many fields here apart from the commonly
found `nodeName` and `nodeDescription` fields.
### accountActivationRequired
Whether or not users need to confirm their email before completing registration.
*(boolean)*
!!! note
Not to be confused with account approval, where each registration needs to
be manually approved by an admin. Account approval has no nodeinfo entry.
### features
Array of strings denoting supported server features. E.g. a server supporting
quote posts should include a `"quote_posting"` entry here.
A non-exhaustive list of possible features:
- `polls`
- `quote_posting`
- `editing`
- `bubble_timeline`
- `pleroma_emoji_reactions` *(Unicode emoji)*
- `custom_emoji_reactions`
- `akkoma_api`
- `akkoma:machine_translation`
- `mastodon_api`
- `pleroma_api`
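For orientation, a trimmed-down sketch of how such a `metadata` section might look (example values only):
```json
{
  "metadata": {
    "nodeName": "My Akkoma",
    "nodeDescription": "A cosy little instance",
    "features": ["polls", "quote_posting", "editing", "pleroma_emoji_reactions", "akkoma_api"],
    "federatedTimelineAvailable": true,
    "invitesEnabled": false
  }
}
```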
### federatedTimelineAvailable
Whether or not the “federated timeline”, i.e. a timeline containing posts from
the entire known network, is made available.
*(boolean)*
### federation
This section is optional and can contain various custom keys describing federation policies.
The following are required to be present:
- `enabled` *(boolean)* whether the server federates at all
A non-exhaustive list of optional keys:
- `exclusions` *(boolean)* whether some federation policies are withheld
- `mrf_simple` *(object)* describes how the Simple MRF policy is configured
### fieldsLimits
A JSON object documenting restrictions on user account info fields.
All properties are integers.
- `maxFields` maximum number of account info fields local users can create
- `maxRemoteFields` maximum number of account info fields remote users can have
before the user gets rejected or fields truncated
- `nameLength` maximum length of a field's name
- `valueLength` maximum length of a field's value
### invitesEnabled
Whether or not signing up via invite codes is possible.
*(boolean)*
### localBubbleInstances
Array of domains (as strings) of other instances chosen
by the admin which are shown in the bubble timeline.
### mailerEnabled
Whether or not the instance can send out emails.
*(boolean)*
### nodeDescription
Human-friendly description of this instance
*(string)*
### nodeName
Human-friendly name of this instance
*(string)*
### pollLimits
JSON object containing limits for polls created by local users.
All values are integers.
- `max_options` maximum number of poll options
- `max_option_chars` maximum characters per poll option
- `min_expiration` minimum time in seconds a poll must be open for
- `max_expiration` maximum time a poll is allowed to be open for
### postFormats
Array of strings containing media types for supported post source formats.
A non-exhaustive list of possible values:
- `text/plain`
- `text/markdown`
- `text/bbcode`
- `text/x.misskeymarkdown`
### private
Whether or not unauthenticated API access is permitted.
*(boolean)*
### privilegedStaff
Whether or not moderators are trusted to perform some
additional tasks of higher responsibility, e.g. issuing password reset emails.
*(boolean)*
### publicTimelineVisibility
JSON object containing boolean-valued keys reporting
if a given timeline can be viewed without login.
- `local`
- `federated`
- `bubble`
### restrictedNicknames
Array of strings listing nicknames forbidden to be used during signup.
### skipThreadContainment
Whether broken threads are filtered out
*(boolean)*
### staffAccounts
Array containing ActivityPub IDs of local accounts
with some form of elevated privilege on the instance.
### suggestions
JSON object containing info on whether the interaction-based
Mastodon `/api/v1/suggestions` feature is enabled and optionally
additional implementation-defined fields with more details
on e.g. how suggested users are selected.
!!! note
This has no relation to the newer `/api/v2/suggestions` API
which also (or exclusively) contains staff-curated entries.
- `enabled` *(boolean)* whether or not user recommendations are enabled
### uploadLimits
JSON object documenting various upload-related size limits.
All values are integers and in bytes.
- `avatar` maximum size of uploaded user avatars
- `banner` maximum size of uploaded user profile banners
- `background` maximum size of uploaded user profile backgrounds
- `general` maximum size for all other kinds of uploads


@ -1,6 +1,6 @@
## Required dependencies
* PostgreSQL 9.6+
* PostgreSQL 12+
* Elixir 1.14+ (currently tested up to 1.16)
* Erlang OTP 25+ (currently tested up to OTP26)
* git


@ -60,7 +60,7 @@ ServerTokens Prod
Include /etc/letsencrypt/options-ssl-apache.conf
# Uncomment the following to enable MediaProxy caching on disk
#CacheRoot /tmp/akkoma-media-cache/
#CacheRoot /var/tmp/akkoma-media-cache/
#CacheDirLevels 1
#CacheDirLength 2
#CacheEnable disk /proxy


@ -16,7 +16,7 @@
SCRIPTNAME=${0##*/}
# mod_disk_cache directory
CACHE_DIRECTORY="/tmp/akkoma-media-cache"
CACHE_DIRECTORY="/var/tmp/akkoma-media-cache"
## Removes an item via the htcacheclean utility
## $1 - the filename, can be a pattern .


@ -12,26 +12,22 @@ example.tld {
output file /var/log/caddy/akkoma.log
}
encode gzip
# this is explicitly IPv4 since Pleroma.Web.Endpoint binds on IPv4 only
# and `localhost.` resolves to [::0] on some systems: see issue #930
reverse_proxy 127.0.0.1:4000
# Uncomment if using a separate media subdomain
#@mediaproxy path /media/* /proxy/*
#handle @mediaproxy {
# redir https://media.example.tld{uri} permanent
#}
@mediaproxy path /media/* /proxy/*
handle @mediaproxy {
redir https://media.example.tld{uri} permanent
}
}
# Uncomment if using a separate media subdomain
#media.example.tld {
# @mediaproxy path /media/* /proxy/*
# reverse_proxy @mediaproxy 127.0.0.1:4000 {
# transport http {
# response_header_timeout 10s
# read_timeout 15s
# }
# }
#}
media.example.tld {
@mediaproxy path /media/* /proxy/*
reverse_proxy @mediaproxy 127.0.0.1:4000 {
transport http {
response_header_timeout 10s
read_timeout 15s
}
}
}


@ -3,7 +3,7 @@
# See the documentation at docs.akkoma.dev for your particular distro/OS for
# installation instructions.
proxy_cache_path /tmp/akkoma-media-cache levels=1:2 keys_zone=akkoma_media_cache:10m max_size=1g
proxy_cache_path /var/tmp/akkoma-media-cache levels=1:2 keys_zone=akkoma_media_cache:10m max_size=1g
inactive=720m use_temp_path=off;
# this is explicitly IPv4 since Pleroma.Web.Endpoint binds on IPv4 only


@ -5,7 +5,7 @@
SCRIPTNAME=${0##*/}
# NGINX cache directory
CACHE_DIRECTORY="/tmp/akkoma-media-cache"
CACHE_DIRECTORY="/var/tmp/akkoma-media-cache"
## Return the files where the items are cached.
## $1 - the filename, can be a pattern .


@ -16,7 +16,7 @@ defmodule Mix.Pleroma do
:fast_html,
:oban
]
@cachex_children ["object", "user", "scrubber", "web_resp"]
@cachex_children ["object", "user", "scrubber", "web_resp", "http_backoff"]
@doc "Common functions to be reused in mix tasks"
def start_pleroma do
Pleroma.Config.Holder.save_default()
@ -112,18 +112,26 @@ defmodule Mix.Pleroma do
end
end
def shell_info(message) do
def shell_info(message) when is_binary(message) or is_list(message) do
if mix_shell?(),
do: Mix.shell().info(message),
else: IO.puts(message)
end
def shell_error(message) do
def shell_info(message) do
shell_info("#{inspect(message)}")
end
def shell_error(message) when is_binary(message) or is_list(message) do
if mix_shell?(),
do: Mix.shell().error(message),
else: IO.puts(:stderr, message)
end
def shell_error(message) do
shell_error("#{inspect(message)}")
end
@doc "Performs a safe check whether `Mix.shell/0` is available (does not raise if Mix is not loaded)"
def mix_shell?, do: :erlang.function_exported(Mix, :shell, 0)


@ -8,7 +8,6 @@ defmodule Mix.Tasks.Pleroma.Activity do
alias Pleroma.User
alias Pleroma.Web.CommonAPI
alias Pleroma.Pagination
require Logger
import Mix.Pleroma
import Ecto.Query
@ -17,7 +16,7 @@ defmodule Mix.Tasks.Pleroma.Activity do
id
|> Activity.get_by_id()
|> IO.inspect()
|> shell_info()
end
def run(["delete_by_keyword", user, keyword | _rest]) do
@ -35,7 +34,7 @@ defmodule Mix.Tasks.Pleroma.Activity do
)
|> Enum.map(fn x -> CommonAPI.delete(x.id, u) end)
|> Enum.count()
|> IO.puts()
|> shell_info()
end
defp query_with(q, search_query) do


@ -20,6 +20,102 @@ defmodule Mix.Tasks.Pleroma.Database do
@shortdoc "A collection of database related tasks"
@moduledoc File.read!("docs/docs/administration/CLI_tasks/database.md")
defp maybe_limit(query, limit_cnt) do
if is_number(limit_cnt) and limit_cnt > 0 do
limit(query, [], ^limit_cnt)
else
query
end
end
defp limit_statement(limit) when is_number(limit) do
if limit > 0 do
"LIMIT #{limit}"
else
""
end
end
defp prune_orphaned_activities_singles(limit) do
%{:num_rows => del_single} =
"""
delete from public.activities
where id in (
select a.id from public.activities a
left join public.objects o on a.data ->> 'object' = o.data ->> 'id'
left join public.activities a2 on a.data ->> 'object' = a2.data ->> 'id'
left join public.users u on a.data ->> 'object' = u.ap_id
where not a.local
and jsonb_typeof(a."data" -> 'object') = 'string'
and o.id is null
and a2.id is null
and u.id is null
#{limit_statement(limit)}
)
"""
|> Repo.query!([], timeout: :infinity)
Logger.info("Prune activity singles: deleted #{del_single} rows...")
del_single
end
defp prune_orphaned_activities_array(limit) do
%{:num_rows => del_array} =
"""
delete from public.activities
where id in (
select a.id from public.activities a
join json_array_elements_text((a."data" -> 'object')::json) as j
on a.data->>'type' = 'Flag'
left join public.objects o on j.value = o.data ->> 'id'
left join public.activities a2 on j.value = a2.data ->> 'id'
left join public.users u on j.value = u.ap_id
group by a.id
having max(o.data ->> 'id') is null
and max(a2.data ->> 'id') is null
and max(u.ap_id) is null
#{limit_statement(limit)}
)
"""
|> Repo.query!([], timeout: :infinity)
Logger.info("Prune activity arrays: deleted #{del_array} rows...")
del_array
end
def prune_orphaned_activities(limit \\ 0, opts \\ []) when is_number(limit) do
# Activities can either refer to a single object id, an array of object ids
# or contain an inlined object (at least after going through our normalisation)
#
# Flag is the only type we support with an array (and it always has arrays).
# Update is the only one with inlined objects.
#
# We already regularly purge old Delete, Undo, Update and Remove and
# rejected Follow requests anyway; no need to explicitly deal with those here.
#
# Since there's an index on types and there are typically only few Flag
# activities, it's _much_ faster to utilise the index. To avoid accidentally
# deleting useful activities should more types be added, keep typeof for singles.
# Prune activities who link to an array of objects
del_array =
if Keyword.get(opts, :arrays, true) do
prune_orphaned_activities_array(limit)
else
0
end
# Prune activities who link to a single object
del_single =
if Keyword.get(opts, :singles, true) do
prune_orphaned_activities_singles(limit)
else
0
end
del_single + del_array
end
def run(["remove_embedded_objects" | args]) do
{options, [], []} =
OptionParser.parse(
@ -62,6 +158,37 @@ defmodule Mix.Tasks.Pleroma.Database do
)
end
def run(["prune_orphaned_activities" | args]) do
{options, [], []} =
OptionParser.parse(
args,
strict: [
limit: :integer,
singles: :boolean,
arrays: :boolean
]
)
start_pleroma()
{limit, options} = Keyword.pop(options, :limit, 0)
log_message = "Pruning orphaned activities"
log_message =
if limit > 0 do
log_message <> ", limiting deletion to #{limit} rows"
else
log_message
end
Logger.info(log_message)
deleted = prune_orphaned_activities(limit, options)
Logger.info("Deleted #{deleted} rows")
end
def run(["prune_objects" | args]) do
{options, [], []} =
OptionParser.parse(
@ -70,7 +197,8 @@ defmodule Mix.Tasks.Pleroma.Database do
vacuum: :boolean,
keep_threads: :boolean,
keep_non_public: :boolean,
prune_orphaned_activities: :boolean
prune_orphaned_activities: :boolean,
limit: :integer
]
)
@ -79,6 +207,8 @@ defmodule Mix.Tasks.Pleroma.Database do
deadline = Pleroma.Config.get([:instance, :remote_post_retention_days])
time_deadline = NaiveDateTime.utc_now() |> NaiveDateTime.add(-(deadline * 86_400))
limit_cnt = Keyword.get(options, :limit, 0)
log_message = "Pruning objects older than #{deadline} days"
log_message =
@ -110,129 +240,124 @@ defmodule Mix.Tasks.Pleroma.Database do
log_message
end
log_message =
if limit_cnt > 0 do
log_message <> ", limiting to #{limit_cnt} rows"
else
log_message
end
Logger.info(log_message)
if Keyword.get(options, :keep_threads) do
# We want to delete objects from threads where
# 1. the newest post is still old
# 2. none of the activities is local
# 3. none of the activities is bookmarked
# 4. optionally none of the posts is non-public
deletable_context =
if Keyword.get(options, :keep_non_public) do
Pleroma.Activity
|> join(:left, [a], b in Pleroma.Bookmark, on: a.id == b.activity_id)
|> group_by([a], fragment("? ->> 'context'::text", a.data))
|> having(
[a],
not fragment(
# Posts (checked on Create Activity) is non-public
"bool_or((not(?->'to' \\? ? OR ?->'cc' \\? ?)) and ? ->> 'type' = 'Create')",
a.data,
^Pleroma.Constants.as_public(),
a.data,
^Pleroma.Constants.as_public(),
a.data
{del_obj, _} =
if Keyword.get(options, :keep_threads) do
# We want to delete objects from threads where
# 1. the newest post is still old
# 2. none of the activities is local
# 3. none of the activities is bookmarked
# 4. optionally none of the posts is non-public
deletable_context =
if Keyword.get(options, :keep_non_public) do
Pleroma.Activity
|> join(:left, [a], b in Pleroma.Bookmark, on: a.id == b.activity_id)
|> group_by([a], fragment("? ->> 'context'::text", a.data))
|> having(
[a],
not fragment(
# Posts (checked on Create Activity) is non-public
"bool_or((not(?->'to' \\? ? OR ?->'cc' \\? ?)) and ? ->> 'type' = 'Create')",
a.data,
^Pleroma.Constants.as_public(),
a.data,
^Pleroma.Constants.as_public(),
a.data
)
)
)
else
Pleroma.Activity
|> join(:left, [a], b in Pleroma.Bookmark, on: a.id == b.activity_id)
|> group_by([a], fragment("? ->> 'context'::text", a.data))
end
|> having([a], max(a.updated_at) < ^time_deadline)
|> having([a], not fragment("bool_or(?)", a.local))
|> having([_, b], fragment("max(?::text) is null", b.id))
|> select([a], fragment("? ->> 'context'::text", a.data))
else
Pleroma.Activity
|> join(:left, [a], b in Pleroma.Bookmark, on: a.id == b.activity_id)
|> group_by([a], fragment("? ->> 'context'::text", a.data))
end
|> having([a], max(a.updated_at) < ^time_deadline)
|> having([a], not fragment("bool_or(?)", a.local))
|> having([_, b], fragment("max(?::text) is null", b.id))
|> maybe_limit(limit_cnt)
|> select([a], fragment("? ->> 'context'::text", a.data))
Pleroma.Object
|> where([o], fragment("? ->> 'context'::text", o.data) in subquery(deletable_context))
else
if Keyword.get(options, :keep_non_public) do
Pleroma.Object
|> where(
[o],
fragment(
"?->'to' \\? ? OR ?->'cc' \\? ?",
o.data,
^Pleroma.Constants.as_public(),
o.data,
^Pleroma.Constants.as_public()
)
)
|> where([o], fragment("? ->> 'context'::text", o.data) in subquery(deletable_context))
else
deletable =
if Keyword.get(options, :keep_non_public) do
Pleroma.Object
|> where(
[o],
fragment(
"?->'to' \\? ? OR ?->'cc' \\? ?",
o.data,
^Pleroma.Constants.as_public(),
o.data,
^Pleroma.Constants.as_public()
)
)
else
Pleroma.Object
end
|> where([o], o.updated_at < ^time_deadline)
|> where(
[o],
fragment("split_part(?->>'actor', '/', 3) != ?", o.data, ^Pleroma.Web.Endpoint.host())
)
|> maybe_limit(limit_cnt)
|> select([o], o.id)
Pleroma.Object
|> where([o], o.id in subquery(deletable))
end
|> where([o], o.updated_at < ^time_deadline)
|> where(
[o],
fragment("split_part(?->>'actor', '/', 3) != ?", o.data, ^Pleroma.Web.Endpoint.host())
)
end
|> Repo.delete_all(timeout: :infinity)
|> Repo.delete_all(timeout: :infinity)
Logger.info("Deleted #{del_obj} objects...")
if !Keyword.get(options, :keep_threads) do
# Without the --keep-threads option, it's possible that bookmarked
# objects have been deleted. We remove the corresponding bookmarks.
"""
delete from public.bookmarks
where id in (
select b.id from public.bookmarks b
left join public.activities a on b.activity_id = a.id
left join public.objects o on a."data" ->> 'object' = o.data ->> 'id'
where o.id is null
)
"""
|> Repo.query([], timeout: :infinity)
%{:num_rows => del_bookmarks} =
"""
delete from public.bookmarks
where id in (
select b.id from public.bookmarks b
left join public.activities a on b.activity_id = a.id
left join public.objects o on a."data" ->> 'object' = o.data ->> 'id'
where o.id is null
)
"""
|> Repo.query!([], timeout: :infinity)
Logger.info("Deleted #{del_bookmarks} orphaned bookmarks...")
end
if Keyword.get(options, :prune_orphaned_activities) do
# Prune activities who link to a single object
"""
delete from public.activities
where id in (
select a.id from public.activities a
left join public.objects o on a.data ->> 'object' = o.data ->> 'id'
left join public.activities a2 on a.data ->> 'object' = a2.data ->> 'id'
left join public.users u on a.data ->> 'object' = u.ap_id
where not a.local
and jsonb_typeof(a."data" -> 'object') = 'string'
and o.id is null
and a2.id is null
and u.id is null
)
"""
|> Repo.query([], timeout: :infinity)
# Prune activities who link to an array of objects
"""
delete from public.activities
where id in (
select a.id from public.activities a
join json_array_elements_text((a."data" -> 'object')::json) as j on jsonb_typeof(a."data" -> 'object') = 'array'
left join public.objects o on j.value = o.data ->> 'id'
left join public.activities a2 on j.value = a2.data ->> 'id'
left join public.users u on j.value = u.ap_id
group by a.id
having max(o.data ->> 'id') is null
and max(a2.data ->> 'id') is null
and max(u.ap_id) is null
)
"""
|> Repo.query([], timeout: :infinity)
del_activities = prune_orphaned_activities()
Logger.info("Deleted #{del_activities} orphaned activities...")
end
"""
DELETE FROM hashtags AS ht
WHERE NOT EXISTS (
SELECT 1 FROM hashtags_objects hto
WHERE ht.id = hto.hashtag_id)
"""
|> Repo.query()
%{:num_rows => del_hashtags} =
"""
DELETE FROM hashtags AS ht
WHERE NOT EXISTS (
SELECT 1 FROM hashtags_objects hto
WHERE ht.id = hto.hashtag_id)
"""
|> Repo.query!()
Logger.info("Deleted #{del_hashtags} no longer used hashtags...")
if Keyword.get(options, :vacuum) do
Logger.info("Starting vacuum...")
Maintenance.vacuum("full")
end
Logger.info("All done!")
end
def run(["prune_task"]) do


@ -3,7 +3,6 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
alias Pleroma.Repo
alias Pleroma.User
require Logger
require Pleroma.Constants
import Mix.Pleroma
@ -14,13 +13,20 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
start_pleroma()
Pleroma.HTTP.get(url)
|> shell_info()
end
def run(["fetch_object", url]) do
start_pleroma()
Pleroma.Object.Fetcher.fetch_object_from_id(url)
|> IO.inspect()
end
def run(["home_timeline", nickname]) do
start_pleroma()
user = Repo.get_by!(User, nickname: nickname)
Logger.info("Home timeline query #{user.nickname}")
shell_info("Home timeline query #{user.nickname}")
followed_hashtags =
user
@ -49,14 +55,14 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
|> limit(20)
Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
|> IO.puts()
|> shell_info()
end
def run(["user_timeline", nickname, reading_nickname]) do
start_pleroma()
user = Repo.get_by!(User, nickname: nickname)
reading_user = Repo.get_by!(User, nickname: reading_nickname)
Logger.info("User timeline query #{user.nickname}")
shell_info("User timeline query #{user.nickname}")
params =
%{limit: 20}
@ -80,7 +86,7 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
|> limit(20)
Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
|> IO.puts()
|> shell_info()
end
def run(["notifications", nickname]) do
@ -96,7 +102,7 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
|> limit(20)
Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
|> IO.puts()
|> shell_info()
end
def run(["known_network", nickname]) do
@ -122,6 +128,6 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
|> limit(20)
Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
|> IO.puts()
|> shell_info()
end
end


@ -27,11 +27,11 @@ defmodule Mix.Tasks.Pleroma.Emoji do
]
for {param, value} <- to_print do
IO.puts(IO.ANSI.format([:bright, param, :normal, ": ", value]))
shell_info(IO.ANSI.format([:bright, param, :normal, ": ", value]))
end
# A newline
IO.puts("")
shell_info("")
end)
end
@ -49,7 +49,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
pack = manifest[pack_name]
src = pack["src"]
IO.puts(
shell_info(
IO.ANSI.format([
"Downloading ",
:bright,
@ -67,9 +67,9 @@ defmodule Mix.Tasks.Pleroma.Emoji do
sha_status_text = ["SHA256 of ", :bright, pack_name, :normal, " source file is ", :bright]
if archive_sha == String.upcase(pack["src_sha256"]) do
IO.puts(IO.ANSI.format(sha_status_text ++ [:green, "OK"]))
shell_info(IO.ANSI.format(sha_status_text ++ [:green, "OK"]))
else
IO.puts(IO.ANSI.format(sha_status_text ++ [:red, "BAD"]))
shell_info(IO.ANSI.format(sha_status_text ++ [:red, "BAD"]))
raise "Bad SHA256 for #{pack_name}"
end
@ -80,7 +80,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
|> Path.dirname()
|> Path.join(pack["files"])
IO.puts(
shell_info(
IO.ANSI.format([
"Fetching the file list for ",
:bright,
@ -94,7 +94,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
files = fetch_and_decode!(files_loc)
IO.puts(IO.ANSI.format(["Unpacking ", :bright, pack_name]))
shell_info(IO.ANSI.format(["Unpacking ", :bright, pack_name]))
pack_path =
Path.join([
@ -115,7 +115,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
file_list: files_to_unzip
)
IO.puts(IO.ANSI.format(["Writing pack.json for ", :bright, pack_name]))
shell_info(IO.ANSI.format(["Writing pack.json for ", :bright, pack_name]))
pack_json = %{
pack: %{
@ -132,7 +132,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
File.write!(Path.join(pack_path, "pack.json"), Jason.encode!(pack_json, pretty: true))
Pleroma.Emoji.reload()
else
IO.puts(IO.ANSI.format([:bright, :red, "No pack named \"#{pack_name}\" found"]))
shell_info(IO.ANSI.format([:bright, :red, "No pack named \"#{pack_name}\" found"]))
end
end
end
@ -180,14 +180,14 @@ defmodule Mix.Tasks.Pleroma.Emoji do
custom_exts
end
IO.puts("Using #{Enum.join(exts, " ")} extensions")
shell_info("Using #{Enum.join(exts, " ")} extensions")
IO.puts("Downloading the pack and generating SHA256")
shell_info("Downloading the pack and generating SHA256")
{:ok, %{body: binary_archive}} = Pleroma.HTTP.get(src)
archive_sha = :crypto.hash(:sha256, binary_archive) |> Base.encode16()
IO.puts("SHA256 is #{archive_sha}")
shell_info("SHA256 is #{archive_sha}")
pack_json = %{
name => %{
@ -208,7 +208,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
File.write!(files_name, Jason.encode!(emoji_map, pretty: true))
IO.puts("""
shell_info("""
#{files_name} has been created and contains the list of all found emojis in the pack.
Please review the files in the pack and remove those not needed.
@ -230,11 +230,11 @@ defmodule Mix.Tasks.Pleroma.Emoji do
)
)
IO.puts("#{pack_file} has been updated with the #{name} pack")
shell_info("#{pack_file} has been updated with the #{name} pack")
else
File.write!(pack_file, Jason.encode!(pack_json, pretty: true))
IO.puts("#{pack_file} has been created with the #{name} pack")
shell_info("#{pack_file} has been created with the #{name} pack")
end
Pleroma.Emoji.reload()
@ -243,7 +243,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
def run(["reload"]) do
start_pleroma()
Pleroma.Emoji.reload()
IO.puts("Emoji packs have been reloaded.")
shell_info("Emoji packs have been reloaded.")
end
defp fetch_and_decode!(from) do

View file

@ -11,7 +11,6 @@ defmodule Mix.Tasks.Pleroma.RefreshCounterCache do
alias Pleroma.CounterCache
alias Pleroma.Repo
require Logger
import Ecto.Query
def run([]) do


@ -48,7 +48,7 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
]
)
IO.puts("Created indices. Starting to insert posts.")
shell_info("Created indices. Starting to insert posts.")
chunk_size = Pleroma.Config.get([Pleroma.Search.Meilisearch, :initial_indexing_chunk_size])
@ -65,7 +65,7 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
)
count = query |> Pleroma.Repo.aggregate(:count, :data)
IO.puts("Entries to index: #{count}")
shell_info("Entries to index: #{count}")
Pleroma.Repo.stream(
query,
@ -92,10 +92,10 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
with {:ok, res} <- result do
if not Map.has_key?(res, "indexUid") do
IO.puts("\nFailed to index: #{inspect(result)}")
shell_info("\nFailed to index: #{inspect(result)}")
end
else
e -> IO.puts("\nFailed to index due to network error: #{inspect(e)}")
e -> shell_error("\nFailed to index due to network error: #{inspect(e)}")
end
end)
|> Stream.run()
@ -126,11 +126,15 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
decoded = Jason.decode!(result.body)
if decoded["results"] do
Enum.each(decoded["results"], fn %{"description" => desc, "key" => key} ->
IO.puts("#{desc}: #{key}")
Enum.each(decoded["results"], fn
%{"name" => name, "key" => key} ->
shell_info("#{name}: #{key}")
%{"description" => desc, "key" => key} ->
shell_info("#{desc}: #{key}")
end)
else
IO.puts("Error fetching the keys, check the master key is correct: #{inspect(decoded)}")
shell_error("Error fetching the keys, check the master key is correct: #{inspect(decoded)}")
end
end
@ -138,7 +142,7 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
start_pleroma()
{:ok, result} = meili_get("/indexes/objects/stats")
IO.puts("Number of entries: #{result["numberOfDocuments"]}")
IO.puts("Indexing? #{result["isIndexing"]}")
shell_info("Number of entries: #{result["numberOfDocuments"]}")
shell_info("Indexing? #{result["isIndexing"]}")
end
end


@ -38,7 +38,7 @@ defmodule Mix.Tasks.Pleroma.Security do
Logger.put_process_level(self(), :notice)
start_pleroma()
IO.puts("""
shell_info("""
+------------------------+
| SPOOF SEARCH UPLOADS |
+------------------------+
@ -55,7 +55,7 @@ defmodule Mix.Tasks.Pleroma.Security do
Logger.put_process_level(self(), :notice)
start_pleroma()
IO.puts("""
shell_info("""
+----------------------+
| SPOOF SEARCH NOTES |
+----------------------+
@ -77,7 +77,7 @@ defmodule Mix.Tasks.Pleroma.Security do
uploads_search_spoofs_local_dir(Config.get!([Pleroma.Uploaders.Local, :uploads]))
_ ->
IO.puts("""
shell_info("""
NOTE:
Not using local uploader; thus not affected by this exploit.
It's impossible to check for files, but in case local uploader was used before
@ -98,13 +98,13 @@ defmodule Mix.Tasks.Pleroma.Security do
orphaned_attachs = upload_search_orphaned_attachments(not_orphaned_urls)
IO.puts("\nSearch concluded; here are the results:")
shell_info("\nSearch concluded; here are the results:")
pretty_print_list_with_title(emoji, "Emoji")
pretty_print_list_with_title(files, "Uploaded Files")
pretty_print_list_with_title(post_attachs, "(Not Deleted) Post Attachments")
pretty_print_list_with_title(orphaned_attachs, "Orphaned Uploads")
IO.puts("""
shell_info("""
In total found
#{length(emoji)} emoji
#{length(files)} uploads
@ -116,7 +116,7 @@ defmodule Mix.Tasks.Pleroma.Security do
defp uploads_search_spoofs_local_dir(dir) do
local_dir = String.replace_suffix(dir, "/", "")
IO.puts("Searching for suspicious files in #{local_dir}...")
shell_info("Searching for suspicious files in #{local_dir}...")
glob_ext = "{" <> Enum.join(@activity_exts, ",") <> "}"
@ -128,7 +128,7 @@ defmodule Mix.Tasks.Pleroma.Security do
end
defp uploads_search_spoofs_notes() do
IO.puts("Now querying DB for posts with spoofing attachments. This might take a while...")
shell_info("Now querying DB for posts with spoofing attachments. This might take a while...")
patterns = [local_id_pattern() | activity_ext_url_patterns()]
@ -153,7 +153,7 @@ defmodule Mix.Tasks.Pleroma.Security do
end
defp upload_search_orphaned_attachments(not_orphaned_urls) do
IO.puts("""
shell_info("""
Now querying DB for orphaned spoofing attachments (i.e. their post was deleted,
but if :cleanup_attachments was not enabled, traces remain in the database)
This might take a bit...
@ -184,7 +184,7 @@ defmodule Mix.Tasks.Pleroma.Security do
# | S P O O F - I N S E R T E D |
# +-----------------------------+
defp do_spoof_inserted() do
IO.puts("""
shell_info("""
Searching for local posts whose Create activity has no ActivityPub id...
This is a pretty good indicator, but only for spoofs of local actors
and only if the spoofing happened after around late 2021.
@ -194,9 +194,9 @@ defmodule Mix.Tasks.Pleroma.Security do
search_local_notes_without_create_id()
|> Enum.sort()
IO.puts("Done.\n")
shell_info("Done.\n")
IO.puts("""
shell_info("""
Now trying to weed out other poorly hidden spoofs.
This can't detect all and may have some false positives.
""")
@ -207,9 +207,9 @@ defmodule Mix.Tasks.Pleroma.Security do
search_sus_notes_by_id_patterns()
|> Enum.filter(fn r -> !(r in likely_spoofed_posts_set) end)
IO.puts("Done.\n")
shell_info("Done.\n")
IO.puts("""
shell_info("""
Finally, searching for spoofed, local user accounts.
(It's impossible to detect spoofed remote users)
""")
@ -220,7 +220,7 @@ defmodule Mix.Tasks.Pleroma.Security do
pretty_print_list_with_title(idless_create, "Likely Spoofed Posts")
pretty_print_list_with_title(spoofed_users, "Spoofed local user accounts")
IO.puts("""
shell_info("""
In total found:
#{length(spoofed_users)} bogus users
#{length(idless_create)} likely spoofed posts
@ -289,27 +289,27 @@ defmodule Mix.Tasks.Pleroma.Security do
defp pretty_print_list_with_title(list, title) do
title_len = String.length(title)
title_underline = String.duplicate("=", title_len)
IO.puts(title)
IO.puts(title_underline)
shell_info(title)
shell_info(title_underline)
pretty_print_list(list)
end
defp pretty_print_list([]), do: IO.puts("")
defp pretty_print_list([]), do: shell_info("")
defp pretty_print_list([{a, o} | rest])
when (is_binary(a) or is_number(a)) and is_binary(o) do
IO.puts(" {#{a}, #{o}}")
shell_info(" {#{a}, #{o}}")
pretty_print_list(rest)
end
defp pretty_print_list([{u, a, o} | rest])
when is_binary(a) and is_binary(u) and is_binary(o) do
IO.puts(" {#{u}, #{a}, #{o}}")
shell_info(" {#{u}, #{a}, #{o}}")
pretty_print_list(rest)
end
defp pretty_print_list([e | rest]) when is_binary(e) do
IO.puts(" #{e}")
shell_info(" #{e}")
pretty_print_list(rest)
end


@ -114,7 +114,7 @@ defmodule Mix.Tasks.Pleroma.User do
{:ok, token} <- Pleroma.PasswordResetToken.create_token(user) do
shell_info("Generated password reset token for #{user.nickname}")
IO.puts("URL: #{~p[/api/v1/pleroma/password_reset/#{token.token}]}")
shell_info("URL: #{~p[/api/v1/pleroma/password_reset/#{token.token}]}")
else
_ ->
shell_error("No local user #{nickname}")
@ -301,7 +301,7 @@ defmodule Mix.Tasks.Pleroma.User do
shell_info("Generated user invite token " <> String.replace(invite.invite_type, "_", " "))
url = url(~p[/registration/#{invite.token}])
IO.puts(url)
shell_info(url)
else
error ->
shell_error("Could not create invite token: #{inspect(error)}")
@ -373,7 +373,7 @@ defmodule Mix.Tasks.Pleroma.User do
nickname
|> User.get_cached_by_nickname()
shell_info("#{inspect(user)}")
shell_info(user)
end
def run(["send_confirmation", nickname]) do
@ -457,7 +457,7 @@ defmodule Mix.Tasks.Pleroma.User do
with %User{local: true} = user <- User.get_cached_by_nickname(nickname) do
blocks = User.following_ap_ids(user)
IO.puts("#{inspect(blocks)}")
shell_info(blocks)
end
end
@ -516,12 +516,12 @@ defmodule Mix.Tasks.Pleroma.User do
{:follow_data, Pleroma.Web.ActivityPub.Utils.fetch_latest_follow(local, remote)} do
calculated_state = User.following?(local, remote)
IO.puts(
shell_info(
"Request state is #{request_state}, vs calculated state of following=#{calculated_state}"
)
if calculated_state == false && request_state == "accept" do
IO.puts("Discrepancy found, fixing")
shell_info("Discrepancy found, fixing")
Pleroma.Web.CommonAPI.reject_follow_request(local, remote)
shell_info("Relationship fixed")
else
@ -551,14 +551,14 @@ defmodule Mix.Tasks.Pleroma.User do
|> Stream.each(fn users ->
users
|> Enum.each(fn user ->
IO.puts("Re-Resolving: #{user.ap_id}")
shell_info("Re-Resolving: #{user.ap_id}")
with {:ok, user} <- Pleroma.User.fetch_by_ap_id(user.ap_id),
changeset <- Pleroma.User.update_changeset(user),
{:ok, _user} <- Pleroma.User.update_and_set_cache(changeset) do
:ok
else
error -> IO.puts("Could not resolve: #{user.ap_id}, #{inspect(error)}")
error -> shell_info("Could not resolve: #{user.ap_id}, #{inspect(error)}")
end
end)
end)


@ -258,6 +258,27 @@ defmodule Pleroma.Activity do
def get_create_by_object_ap_id(_), do: nil
@doc """
Accepts a list of `ap_id` values.
Returns a query yielding Create activities for the given objects,
in the same order as they were specified in the input list.
"""
@spec get_presorted_create_by_object_ap_id([String.t()]) :: Ecto.Queryable.t()
def get_presorted_create_by_object_ap_id(ap_ids) do
from(
a in Activity,
join:
ids in fragment(
"SELECT * FROM UNNEST(?::text[]) WITH ORDINALITY AS ids(ap_id, ord)",
^ap_ids
),
on:
ids.ap_id == fragment("?->>'object'", a.data) and
fragment("?->>'type'", a.data) == "Create",
order_by: [asc: ids.ord]
)
end
@doc """
Accepts `ap_id` or list of `ap_id`.
Returns a query.
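A hedged usage sketch of the new get_presorted_create_by_object_ap_id/1 (the ap_ids are hypothetical): because the query joins against UNNEST(...) WITH ORDINALITY and orders by the ordinal, results come back in the same order as the input list, which is what lets a search backend preserve its own relevance ordering.

ap_ids = ["https://example.com/objects/2", "https://example.com/objects/1"]

ap_ids
|> Pleroma.Activity.get_presorted_create_by_object_ap_id()
|> Pleroma.Activity.with_preloaded_object()
|> Pleroma.Repo.all()
# => the Create activity for objects/2 first, then the one for objects/1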


@ -28,7 +28,7 @@ defmodule Pleroma.Activity.HTML do
end
end
defp add_cache_key_for(activity_id, additional_key) do
def add_cache_key_for(activity_id, additional_key) do
current = get_cache_keys_for(activity_id)
unless additional_key in current do


@ -95,34 +95,17 @@ defmodule Pleroma.Application do
opts = [strategy: :one_for_one, name: Pleroma.Supervisor, max_restarts: max_restarts]
with {:ok, data} <- Supervisor.start_link(children, opts) do
set_postgres_server_version()
{:ok, data}
else
case Supervisor.start_link(children, opts) do
{:ok, data} ->
{:ok, data}
e ->
Logger.error("Failed to start!")
Logger.error("#{inspect(e)}")
Logger.critical("Failed to start!")
Logger.critical("#{inspect(e)}")
e
end
end
defp set_postgres_server_version do
version =
with %{rows: [[version]]} <- Ecto.Adapters.SQL.query!(Pleroma.Repo, "show server_version"),
{num, _} <- Float.parse(version) do
num
else
e ->
Logger.warning(
"Could not get the postgres version: #{inspect(e)}.\nSetting the default value of 9.6"
)
9.6
end
:persistent_term.put({Pleroma.Repo, :postgres_version}, version)
end
def load_custom_modules do
dir = Config.get([:modules, :runtime_dir])
@ -179,7 +162,9 @@ defmodule Pleroma.Application do
build_cachex("translations", default_ttl: :timer.hours(24 * 30), limit: 2500),
build_cachex("instances", default_ttl: :timer.hours(24), ttl_interval: 1000, limit: 2500),
build_cachex("request_signatures", default_ttl: :timer.hours(24 * 30), limit: 3000),
build_cachex("rel_me", default_ttl: :timer.hours(24 * 30), limit: 300)
build_cachex("rel_me", default_ttl: :timer.hours(24 * 30), limit: 300),
build_cachex("host_meta", default_ttl: :timer.minutes(120), limit: 5000),
build_cachex("http_backoff", default_ttl: :timer.hours(24 * 30), limit: 10000)
]
end
@ -279,7 +264,9 @@ defmodule Pleroma.Application do
defp http_children do
proxy_url = Config.get([:http, :proxy_url])
proxy = Pleroma.HTTP.AdapterHelper.format_proxy(proxy_url)
pool_size = Config.get([:http, :pool_size])
pool_size = Config.get([:http, :pool_size], 10)
pool_timeout = Config.get([:http, :pool_timeout], 60_000)
connection_timeout = Config.get([:http, :conn_max_idle_time], 10_000)
:public_key.cacerts_load()
@ -289,6 +276,8 @@ defmodule Pleroma.Application do
|> Pleroma.HTTP.AdapterHelper.add_pool_size(pool_size)
|> Pleroma.HTTP.AdapterHelper.maybe_add_proxy_pool(proxy)
|> Pleroma.HTTP.AdapterHelper.ensure_ipv6()
|> Pleroma.HTTP.AdapterHelper.add_default_conn_max_idle_time(connection_timeout)
|> Pleroma.HTTP.AdapterHelper.add_default_pool_max_idle_time(pool_timeout)
|> Keyword.put(:name, MyFinch)
[{Finch, config}]
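A hedged sketch of tuning the new pool defaults; :pool_timeout and :conn_max_idle_time are exactly the keys read above, and the values shown are just the built-in defaults:

config :pleroma, :http,
  pool_size: 10,
  # passed to add_default_pool_max_idle_time/2 (milliseconds)
  pool_timeout: 60_000,
  # passed to add_default_conn_max_idle_time/2 (milliseconds)
  conn_max_idle_time: 10_000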


@ -24,7 +24,6 @@ defmodule Pleroma.Config.TransferTask do
defp reboot_time_subkeys,
do: [
{:pleroma, Pleroma.Captcha, [:seconds_valid]},
{:pleroma, Pleroma.Upload, [:proxy_remote]},
{:pleroma, :instance, [:upload_limit]},
{:pleroma, :http, [:pool_size]},
{:pleroma, :http, [:proxy_url]}


@ -25,7 +25,7 @@ defmodule Pleroma.Constants do
const(static_only_files,
do:
~w(index.html robots.txt static static-fe finmoji emoji packs sounds images instance embed sw.js sw-pleroma.js favicon.png schemas doc assets)
~w(index.html robots.txt static static-fe finmoji emoji packs sounds images instance embed sw.js sw-pleroma.js favicon.png schemas doc)
)
const(status_updatable_fields,
@ -64,4 +64,7 @@ defmodule Pleroma.Constants do
"Service"
]
)
# Internally used as top-level types for media attachments and user images
const(attachment_types, do: ["Document", "Image"])
end


@ -6,8 +6,6 @@ defmodule Pleroma.HTML do
# Scrubbers are compiled on boot so they can be configured in OTP releases
# @on_load :compile_scrubbers
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
def compile_scrubbers do
dir = Path.join(:code.priv_dir(:pleroma), "scrubbers")
@ -67,22 +65,9 @@ defmodule Pleroma.HTML do
end
end
def extract_first_external_url_from_object(%{data: %{"content" => content}} = object)
@spec extract_first_external_url_from_object(Pleroma.Object.t()) :: String.t() | nil
def extract_first_external_url_from_object(%{data: %{"content" => content}})
when is_binary(content) do
unless object.data["fake"] do
key = "URL|#{object.id}"
@cachex.fetch!(:scrubber_cache, key, fn _key ->
{:commit, {:ok, extract_first_external_url(content)}}
end)
else
{:ok, extract_first_external_url(content)}
end
end
def extract_first_external_url_from_object(_), do: {:error, :no_content}
def extract_first_external_url(content) do
content
|> Floki.parse_fragment!()
|> Floki.find("a:not(.mention,.hashtag,.attachment,[rel~=\"tag\"])")
@ -90,4 +75,6 @@ defmodule Pleroma.HTML do
|> Floki.attribute("href")
|> Enum.at(0)
end
def extract_first_external_url_from_object(_), do: nil
end
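A hedged sketch of the simplified API: the object-level caching is gone, and the no-content case now returns nil rather than {:error, :no_content}. The selector shown in the diff excludes mention, hashtag, attachment, and rel="tag" links:

Pleroma.HTML.extract_first_external_url(
  ~s(<p><a href="https://example.com">link</a> <a class="mention">@foo</a></p>)
)
# => "https://example.com"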


@ -74,7 +74,12 @@ defmodule Pleroma.HTTP do
request = build_request(method, headers, options, url, body, params)
client = Tesla.client([Tesla.Middleware.FollowRedirects, Tesla.Middleware.Telemetry])
Logger.debug("Outbound: #{method} #{url}")
request(client, request)
rescue
e ->
Logger.error("Failed to fetch #{url}: #{inspect(e)}")
{:error, :fetch_error}
end
@spec request(Client.t(), keyword()) :: {:ok, Env.t()} | {:error, any()}
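With the new rescue clause, adapter exceptions surface as a tagged error tuple instead of crashing the caller; a hedged sketch (the exact error shapes besides :fetch_error depend on the adapter):

case Pleroma.HTTP.get("http://bad-host.invalid/") do
  {:ok, env} -> env
  # the request raised and was caught by the rescue above
  {:error, :fetch_error} -> nil
  {:error, other} -> other
end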


@ -116,6 +116,20 @@ defmodule Pleroma.HTTP.AdapterHelper do
put_in(opts, [:pools, :default, :conn_opts, :transport_opts, :inet6], true)
end
def add_default_pool_max_idle_time(opts, pool_timeout) do
opts
|> maybe_add_pools()
|> maybe_add_default_pool()
|> put_in([:pools, :default, :pool_max_idle_time], pool_timeout)
end
def add_default_conn_max_idle_time(opts, connection_timeout) do
opts
|> maybe_add_pools()
|> maybe_add_default_pool()
|> put_in([:pools, :default, :conn_max_idle_time], connection_timeout)
end
@doc """
Merge default connection & adapter options with received ones.
"""

lib/pleroma/http/backoff.ex (new file, 121 lines)

@ -0,0 +1,121 @@
defmodule Pleroma.HTTP.Backoff do
alias Pleroma.HTTP
require Logger
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
@backoff_cache :http_backoff_cache
# attempt to parse a timestamp from a header
# returns nil if it can't parse the timestamp
@spec timestamp_or_nil(binary) :: DateTime.t() | nil
defp timestamp_or_nil(header) do
case DateTime.from_iso8601(header) do
{:ok, stamp, _} ->
stamp
_ ->
nil
end
end
# attempt to parse the x-ratelimit-reset header from the headers
@spec x_ratelimit_reset(headers :: list) :: DateTime.t() | nil
defp x_ratelimit_reset(headers) do
with {_header, value} <- List.keyfind(headers, "x-ratelimit-reset", 0),
true <- is_binary(value) do
timestamp_or_nil(value)
else
_ ->
nil
end
end
# attempt to parse the Retry-After header from the headers
# this can be either a timestamp _or_ a number of seconds to wait!
# we'll return a datetime if we can parse it, or nil if we can't
@spec retry_after(headers :: list) :: DateTime.t() | nil
defp retry_after(headers) do
with {_header, value} <- List.keyfind(headers, "retry-after", 0),
true <- is_binary(value) do
# first, see if it's an integer
case Integer.parse(value) do
{seconds, ""} ->
Logger.debug("Parsed Retry-After header: #{seconds} seconds")
DateTime.utc_now() |> Timex.shift(seconds: seconds)
_ ->
# if it's not an integer, try to parse it as a timestamp
timestamp_or_nil(value)
end
else
_ ->
nil
end
end
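  # e.g. (hypothetical headers):
  #   retry_after([{"retry-after", "120"}])                   #=> now + 120 seconds
  #   retry_after([{"retry-after", "2024-06-23T02:20:00Z"}])  #=> that instant
  #   retry_after([])                                         #=> nil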
# given a set of headers, will attempt to find the next backoff timestamp
# if it can't find one, it will default to 5 minutes from now
@spec next_backoff_timestamp(%{headers: list}) :: DateTime.t()
defp next_backoff_timestamp(%{headers: headers}) when is_list(headers) do
default_5_minute_backoff =
DateTime.utc_now()
|> Timex.shift(seconds: 5 * 60)
backoff =
[&x_ratelimit_reset/1, &retry_after/1]
|> Enum.map(& &1.(headers))
|> Enum.find(&(&1 != nil))
if is_nil(backoff) do
Logger.debug("No backoff headers found, defaulting to 5 minutes from now")
default_5_minute_backoff
else
Logger.debug("Found backoff header, will back off until: #{backoff}")
backoff
end
end
defp next_backoff_timestamp(_), do: DateTime.utc_now() |> Timex.shift(seconds: 5 * 60)
# utility function to check the HTTP response for potential backoff headers
# will check if we get a 429 or 503 response, and if we do, will back off for a bit
@spec check_backoff({:ok | :error, HTTP.Env.t()}, binary()) ::
{:ok | :error, HTTP.Env.t()} | {:error, :ratelimit}
defp check_backoff({:ok, env}, host) do
case env.status do
status when status in [429, 503] ->
Logger.error("Rate limited on #{host}! Backing off...")
timestamp = next_backoff_timestamp(env)
ttl = Timex.diff(timestamp, DateTime.utc_now(), :seconds)
# cache the host until the backoff timestamp expires
# (Cachex TTLs are given in milliseconds, hence :timer.seconds/1)
@cachex.put(@backoff_cache, host, true, ttl: :timer.seconds(ttl))
{:error, :ratelimit}
_ ->
{:ok, env}
end
end
defp check_backoff(env, _), do: env
@doc """
This acts as a single entry point for all GET requests.
We check whether the host is in the backoff cache, and if it is, we automatically fail the request.
This ensures that we don't hammer a rate-limiting server with requests, and instead wait for the backoff to expire.
This is a very simple implementation, and can be improved upon!
"""
@spec get(binary, list, list) :: {:ok | :error, HTTP.Env.t()} | {:error, :ratelimit}
def get(url, headers \\ [], options \\ []) do
%{host: host} = URI.parse(url)
case @cachex.get(@backoff_cache, host) do
{:ok, nil} ->
url
|> HTTP.get(headers, options)
|> check_backoff(host)
_ ->
{:error, :ratelimit}
end
end
end
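A hedged usage sketch of the wrapper (the %Tesla.Env{} match assumes Pleroma.HTTP returns Tesla envs, as elsewhere in the codebase):

case Pleroma.HTTP.Backoff.get("https://remote.example/objects/1") do
  {:ok, %Tesla.Env{status: 200, body: body}} ->
    {:ok, body}

  {:error, :ratelimit} ->
    # the host is still in the backoff cache; try again later
    :wait

  other ->
    other
end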


@ -354,7 +354,7 @@ defmodule Pleroma.Object.Fetcher do
with {:ok, %{body: body, status: code, headers: headers, url: final_url}}
when code in 200..299 <-
HTTP.get(id, headers),
HTTP.Backoff.get(id, headers),
remote_host <-
URI.parse(final_url).host,
{:cross_domain_redirect, false} <-


@ -28,7 +28,7 @@ defmodule Pleroma.ScheduledActivity do
timestamps()
end
def changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
defp changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
scheduled_activity
|> cast(attrs, [:scheduled_at, :params])
|> validate_required([:scheduled_at, :params])
@ -40,26 +40,36 @@ defmodule Pleroma.ScheduledActivity do
%{changes: %{params: %{"media_ids" => media_ids} = params}} = changeset
)
when is_list(media_ids) do
media_attachments = Utils.attachments_from_ids(%{media_ids: media_ids})
user = User.get_by_id(changeset.data.user_id)
params =
params
|> Map.put("media_attachments", media_attachments)
|> Map.put("media_ids", media_ids)
case Utils.attachments_from_ids(user, %{media_ids: media_ids}) do
media_attachments when is_list(media_attachments) ->
params =
params
|> Map.put("media_attachments", media_attachments)
|> Map.put("media_ids", media_ids)
put_change(changeset, :params, params)
put_change(changeset, :params, params)
{:error, _} = e ->
e
e ->
{:error, e}
end
end
defp with_media_attachments(changeset), do: changeset
def update_changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
defp update_changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
# note: should this ever allow swapping media attachments, make sure ownership is checked
scheduled_activity
|> cast(attrs, [:scheduled_at])
|> validate_required([:scheduled_at])
|> validate_scheduled_at()
end
def validate_scheduled_at(changeset) do
defp validate_scheduled_at(changeset) do
validate_change(changeset, :scheduled_at, fn _, scheduled_at ->
cond do
not far_enough?(scheduled_at) ->
@ -77,7 +87,7 @@ defmodule Pleroma.ScheduledActivity do
end)
end
def exceeds_daily_user_limit?(user_id, scheduled_at) do
defp exceeds_daily_user_limit?(user_id, scheduled_at) do
ScheduledActivity
|> where(user_id: ^user_id)
|> where([sa], type(sa.scheduled_at, :date) == type(^scheduled_at, :date))
@ -86,7 +96,7 @@ defmodule Pleroma.ScheduledActivity do
|> Kernel.>=(Config.get([ScheduledActivity, :daily_user_limit]))
end
def exceeds_total_user_limit?(user_id) do
defp exceeds_total_user_limit?(user_id) do
ScheduledActivity
|> where(user_id: ^user_id)
|> select([sa], count(sa.id))
@ -108,20 +118,29 @@ defmodule Pleroma.ScheduledActivity do
diff > @min_offset
end
def new(%User{} = user, attrs) do
defp new(%User{} = user, attrs) do
changeset(%ScheduledActivity{user_id: user.id}, attrs)
end
@doc """
Creates a ScheduledActivity and adds it to the queue, to be performed at the scheduled_at date
"""
@spec create(User.t(), map()) :: {:ok, ScheduledActivity.t()} | {:error, Ecto.Changeset.t()}
@spec create(User.t(), map()) :: {:ok, ScheduledActivity.t()} | {:error, any()}
def create(%User{} = user, attrs) do
Multi.new()
|> Multi.insert(:scheduled_activity, new(user, attrs))
|> maybe_add_jobs(Config.get([ScheduledActivity, :enabled]))
|> Repo.transaction()
|> transaction_response
case new(user, attrs) do
%Ecto.Changeset{} = sched_data ->
Multi.new()
|> Multi.insert(:scheduled_activity, sched_data)
|> maybe_add_jobs(Config.get([ScheduledActivity, :enabled]))
|> Repo.transaction()
|> transaction_response
{:error, _} = e ->
e
e ->
{:error, e}
end
end
defp maybe_add_jobs(multi, true) do
@ -187,17 +206,7 @@ defmodule Pleroma.ScheduledActivity do
|> where(user_id: ^user.id)
end
def due_activities(offset \\ 0) do
naive_datetime =
NaiveDateTime.utc_now()
|> NaiveDateTime.add(offset, :millisecond)
ScheduledActivity
|> where([sa], sa.scheduled_at < ^naive_datetime)
|> Repo.all()
end
def job_query(scheduled_activity_id) do
defp job_query(scheduled_activity_id) do
from(j in Oban.Job,
where: j.queue == "scheduled_activities",
where: fragment("args ->> 'activity_id' = ?::text", ^to_string(scheduled_activity_id))
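Since create/2 can now return attachment errors as well as changesets, callers should match on the tuple; a hedged sketch (attrs keys per the cast above):

case Pleroma.ScheduledActivity.create(user, %{scheduled_at: scheduled_at, params: params}) do
  {:ok, scheduled_activity} ->
    scheduled_activity

  {:error, %Ecto.Changeset{} = changeset} ->
    changeset

  # e.g. an attachment-ownership error bubbled up from Utils.attachments_from_ids/2
  {:error, reason} ->
    reason
end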


@ -21,19 +21,12 @@ defmodule Pleroma.Search.DatabaseSearch do
offset = Keyword.get(options, :offset, 0)
author = Keyword.get(options, :author)
search_function =
if :persistent_term.get({Pleroma.Repo, :postgres_version}) >= 11 do
:websearch
else
:plain
end
try do
Activity
|> Activity.with_preloaded_object()
|> Activity.restrict_deactivated_users()
|> restrict_public()
|> query_with(index_type, search_query, search_function)
|> query_with(index_type, search_query)
|> maybe_restrict_local(user)
|> maybe_restrict_author(author)
|> maybe_restrict_blocked(user)
@ -72,25 +65,7 @@ defmodule Pleroma.Search.DatabaseSearch do
)
end
defp query_with(q, :gin, search_query, :plain) do
%{rows: [[tsc]]} =
Ecto.Adapters.SQL.query!(
Pleroma.Repo,
"select current_setting('default_text_search_config')::regconfig::oid;"
)
from([a, o] in q,
where:
fragment(
"to_tsvector(?::oid::regconfig, ?->>'content') @@ plainto_tsquery(?)",
^tsc,
o.data,
^search_query
)
)
end
defp query_with(q, :gin, search_query, :websearch) do
defp query_with(q, :gin, search_query) do
%{rows: [[tsc]]} =
Ecto.Adapters.SQL.query!(
Pleroma.Repo,
@ -108,19 +83,7 @@ defmodule Pleroma.Search.DatabaseSearch do
)
end
defp query_with(q, :rum, search_query, :plain) do
from([a, o] in q,
where:
fragment(
"? @@ plainto_tsquery(?)",
o.fts_content,
^search_query
),
order_by: [fragment("? <=> now()::date", o.inserted_at)]
)
end
defp query_with(q, :rum, search_query, :websearch) do
defp query_with(q, :rum, search_query) do
from([a, o] in q,
where:
fragment(


@ -5,15 +5,27 @@ defmodule Pleroma.Search.Meilisearch do
alias Pleroma.Activity
import Pleroma.Search.DatabaseSearch
import Ecto.Query
@behaviour Pleroma.Search.SearchBackend
defp meili_headers do
private_key = Pleroma.Config.get([Pleroma.Search.Meilisearch, :private_key])
defp meili_headers(key) do
key_header =
if is_nil(key), do: [], else: [{"Authorization", "Bearer #{key}"}]
[{"Content-Type", "application/json"}] ++
if is_nil(private_key), do: [], else: [{"Authorization", "Bearer #{private_key}"}]
[{"Content-Type", "application/json"} | key_header]
end
defp meili_headers_admin do
private_key = Pleroma.Config.get([Pleroma.Search.Meilisearch, :private_key])
meili_headers(private_key)
end
defp meili_headers_search do
search_key =
Pleroma.Config.get([Pleroma.Search.Meilisearch, :search_key]) ||
Pleroma.Config.get([Pleroma.Search.Meilisearch, :private_key])
meili_headers(search_key)
end
def meili_get(path) do
@ -22,7 +34,7 @@ defmodule Pleroma.Search.Meilisearch do
result =
Pleroma.HTTP.get(
Path.join(endpoint, path),
meili_headers()
meili_headers_admin()
)
with {:ok, res} <- result do
@ -30,14 +42,14 @@ defmodule Pleroma.Search.Meilisearch do
end
end
def meili_post(path, params) do
defp meili_search(params) do
endpoint = Pleroma.Config.get([Pleroma.Search.Meilisearch, :url])
result =
Pleroma.HTTP.post(
Path.join(endpoint, path),
Path.join(endpoint, "/indexes/objects/search"),
Jason.encode!(params),
meili_headers()
meili_headers_search()
)
with {:ok, res} <- result do
@ -53,7 +65,7 @@ defmodule Pleroma.Search.Meilisearch do
:put,
Path.join(endpoint, path),
Jason.encode!(params),
meili_headers(),
meili_headers_admin(),
[]
)
@ -70,7 +82,7 @@ defmodule Pleroma.Search.Meilisearch do
:delete,
Path.join(endpoint, path),
"",
meili_headers(),
meili_headers_admin(),
[]
)
end
@ -81,25 +93,20 @@ defmodule Pleroma.Search.Meilisearch do
author = Keyword.get(options, :author)
res =
meili_post(
"/indexes/objects/search",
%{q: query, offset: offset, limit: limit}
)
meili_search(%{q: query, offset: offset, limit: limit})
with {:ok, result} <- res do
hits = result["hits"] |> Enum.map(& &1["ap"])
try do
hits
|> Activity.create_by_object_ap_id()
|> Activity.with_preloaded_object()
|> Activity.get_presorted_create_by_object_ap_id()
|> Activity.with_preloaded_object()
|> Activity.restrict_deactivated_users()
|> maybe_restrict_local(user)
|> maybe_restrict_author(author)
|> maybe_restrict_blocked(user)
|> maybe_fetch(user, query)
|> order_by([object: obj], desc: obj.data["published"])
|> Pleroma.Repo.all()
rescue
_ -> maybe_fetch([], user, query)


@ -10,7 +10,7 @@ defmodule Pleroma.Signature do
alias Pleroma.User
alias Pleroma.Web.ActivityPub.ActivityPub
@known_suffixes ["/publickey", "/main-key"]
@known_suffixes ["/publickey", "/main-key", "#key"]
def key_id_to_actor_id(key_id) do
uri =


@ -13,7 +13,6 @@ defmodule Pleroma.Upload do
* `:uploader`: override uploader
* `:filters`: override filters
* `:size_limit`: override size limit
* `:activity_type`: override activity type
The `%Pleroma.Upload{}` struct: all documented fields are meant to be overwritten in filters:
@ -48,7 +47,6 @@ defmodule Pleroma.Upload do
@type option ::
{:type, :avatar | :banner | :background}
| {:description, String.t()}
| {:activity_type, String.t()}
| {:size_limit, nil | non_neg_integer()}
| {:uploader, module()}
| {:filters, [module()]}
@ -143,7 +141,7 @@ defmodule Pleroma.Upload do
end
%{
activity_type: Keyword.get(opts, :activity_type, activity_type),
activity_type: activity_type,
size_limit: Keyword.get(opts, :size_limit, size_limit),
uploader: Keyword.get(opts, :uploader, Pleroma.Config.get([__MODULE__, :uploader])),
filters:


@ -33,8 +33,7 @@ defmodule Pleroma.Upload.Filter.Exiftool.ReadDescription do
defp read_when_empty(_, file, tag) do
try do
{tag_content, 0} =
System.cmd("exiftool", ["-b", "-s3", tag, file],
stderr_to_stdout: true,
System.cmd("exiftool", ["-b", "-s3", "-ignoreMinorErrors", "-q", "-q", tag, file],
parallelism: true
)


@ -1624,8 +1624,12 @@ defmodule Pleroma.User do
def blocks_user?(_, _), do: false
def blocks_domain?(%User{} = user, %User{} = target) do
%{host: host} = URI.parse(target.ap_id)
def blocks_domain?(%User{} = user, %User{ap_id: ap_id}) do
blocks_domain?(user, ap_id)
end
def blocks_domain?(%User{} = user, url) when is_binary(url) do
%{host: host} = URI.parse(url)
Enum.member?(user.domain_blocks, host)
# TODO: functionality should probably be changed such that subdomains block as well,
# but as it stands, this just hecks up the relationships endpoint
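The new binary clause lets callers pass a raw ActivityPub id or URL directly; a hedged example (hypothetical domain):

# true if "badsite.example" is in user.domain_blocks
Pleroma.User.blocks_domain?(user, "https://badsite.example/objects/123")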


@ -265,35 +265,6 @@ defmodule Pleroma.Web do
end
end
def html do
quote do
use Phoenix.Component
# Import convenience functions from controllers
import Phoenix.Controller,
only: [get_csrf_token: 0, view_module: 1, view_template: 1]
# Include general helpers for rendering HTML
unquote(html_helpers())
end
end
defp html_helpers do
quote do
# HTML escaping functionality
import Phoenix.HTML
# Core UI components and translation
import Pleroma.Web.CoreComponents
import Pleroma.Web.Gettext
# Shortcut for generating JS commands
alias Phoenix.LiveView.JS
# Routes generation with the ~p sigil
unquote(verified_routes())
end
end
def mailer do
quote do
unquote(verified_routes())


@ -155,9 +155,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
# Splice in the child object if we have one.
activity = Maps.put_if_present(activity, :object, object)
ConcurrentLimiter.limit(Pleroma.Web.RichMedia.Helpers, fn ->
Task.start(fn -> Pleroma.Web.RichMedia.Helpers.fetch_data_for_activity(activity) end)
end)
Pleroma.Web.RichMedia.Card.get_by_activity(activity)
# Add local posts to search index
if local, do: Pleroma.Search.add_to_index(activity)
@ -185,7 +183,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
id: "pleroma:fakeid"
}
Pleroma.Web.RichMedia.Helpers.fetch_data_for_activity(activity)
Pleroma.Web.RichMedia.Card.get_by_activity(activity)
{:ok, activity}
{:remote_limit_pass, _} ->
@ -1545,11 +1543,19 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
defp normalize_also_known_as(aka) when is_binary(aka), do: [aka]
defp normalize_also_known_as(nil), do: []
defp normalize_attachment(%{} = attachment), do: [attachment]
defp normalize_attachment(attachment) when is_list(attachment), do: attachment
defp normalize_attachment(_), do: []
defp object_to_user_data(data, additional) do
fields =
data
|> Map.get("attachment", [])
|> Enum.filter(fn %{"type" => t} -> t == "PropertyValue" end)
|> normalize_attachment()
|> Enum.filter(fn
%{"type" => t} -> t == "PropertyValue"
_ -> false
end)
|> Enum.map(fn fields -> Map.take(fields, ["name", "value"]) end)
emojis =
@ -1816,19 +1822,20 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
end
end
def pinned_fetch_task(nil), do: nil
def pinned_fetch_task(%{pinned_objects: pins}) do
if Enum.all?(pins, fn {ap_id, _} ->
Object.get_cached_by_ap_id(ap_id) ||
match?({:ok, _object}, Fetcher.fetch_object_from_id(ap_id))
end) do
:ok
else
:error
end
def enqueue_pin_fetches(%{pinned_objects: pins}) do
# enqueue a fetch job for each pinned object that is not already cached
Enum.each(pins, fn {ap_id, _} ->
if is_nil(Object.get_cached_by_ap_id(ap_id)) do
Pleroma.Workers.RemoteFetcherWorker.enqueue("fetch_remote", %{
"id" => ap_id,
"depth" => 1
})
end
end)
end
def enqueue_pin_fetches(_), do: nil
def make_user_from_ap_id(ap_id, additional \\ []) do
user = User.get_cached_by_ap_id(ap_id)
@ -1836,8 +1843,6 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
Transmogrifier.upgrade_user_from_ap_id(ap_id)
else
with {:ok, data} <- fetch_and_prepare_user_from_ap_id(ap_id, additional) do
{:ok, _pid} = Task.start(fn -> pinned_fetch_task(data) end)
user =
if data.ap_id != ap_id do
User.get_cached_by_ap_id(data.ap_id)
@ -1849,6 +1854,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
user
|> User.remote_user_changeset(data)
|> User.update_and_set_cache()
|> tap(fn _ -> enqueue_pin_fetches(data) end)
else
maybe_handle_clashing_nickname(data)
@ -1856,6 +1862,7 @@ defmodule Pleroma.Web.ActivityPub.ActivityPub do
|> User.remote_user_changeset()
|> Repo.insert()
|> User.set_cache()
|> tap(fn _ -> enqueue_pin_fetches(data) end)
end
end
end


@ -233,7 +233,7 @@ defmodule Pleroma.Web.ActivityPub.MRF do
if function_exported?(policy, :config_description, 0) do
description =
@default_description
|> Map.merge(policy.config_description)
|> Map.merge(policy.config_description())
|> Map.put(:group, :pleroma)
|> Map.put(:tab, :mrf)
|> Map.put(:type, :group)


@ -101,10 +101,19 @@ defmodule Pleroma.Web.ActivityPub.MRF.StealEmojiPolicy do
end
end
defp get_int_header(headers, header_name, default \\ nil) do
with rawval when rawval != :undefined <- :proplists.get_value(header_name, headers),
{int, ""} <- Integer.parse(rawval) do
int
else
_ -> default
end
end
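  # e.g. (hypothetical headers):
  #   get_int_header([{"content-length", "5120"}], "content-length")    #=> 5120
  #   get_int_header([{"content-length", "garbage"}], "content-length") #=> nil
  #   get_int_header([], "content-length", 0)                           #=> 0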
defp is_remote_size_within_limit?(url) do
with {:ok, %{status: status, headers: headers} = _response} when status in 200..299 <-
Pleroma.HTTP.request(:head, url, nil, [], []) do
content_length = :proplists.get_value("content-length", headers, nil)
content_length = get_int_header(headers, "content-length")
size_limit = Config.get([:mrf_steal_emoji, :size_limit], @size_limit)
accept_unknown =
@ -172,7 +181,7 @@ defmodule Pleroma.Web.ActivityPub.MRF.StealEmojiPolicy do
description: <<_::272, _::_*256>>,
key: :hosts | :rejected_shortcodes | :size_limit,
suggestions: [any(), ...],
type: {:list, :string} | {:list, :string} | :integer
type: {:list, :string} | {:list, :string} | :integer | :boolean
},
...
],
@ -209,6 +218,12 @@ defmodule Pleroma.Web.ActivityPub.MRF.StealEmojiPolicy do
type: :integer,
description: "File size limit (in bytes), checked before an emoji is saved to the disk",
suggestions: ["100000"]
},
%{
key: :download_unknown_size,
type: :boolean,
description: "Whether to download emoji if size can't be determined ahead of time",
suggestions: [false, true]
}
]
}
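A hedged config sketch exercising the new option (group and keys as described above; host value is hypothetical):

config :pleroma, :mrf_steal_emoji,
  hosts: ["trusted.example"],
  size_limit: 100_000,
  # when false, emoji whose size can't be determined ahead of time are not downloaded
  download_unknown_size: false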


@ -225,9 +225,7 @@ defmodule Pleroma.Web.ActivityPub.SideEffects do
end
end
ConcurrentLimiter.limit(Pleroma.Web.RichMedia.Helpers, fn ->
Task.start(fn -> Pleroma.Web.RichMedia.Helpers.fetch_data_for_activity(activity) end)
end)
Pleroma.Web.RichMedia.Card.get_by_activity(activity)
Pleroma.Search.add_to_index(Map.put(activity, :object, object))


@ -1034,7 +1034,7 @@ defmodule Pleroma.Web.ActivityPub.Transmogrifier do
with %User{local: false} = user <- User.get_cached_by_ap_id(ap_id),
{:ok, data} <- ActivityPub.fetch_and_prepare_user_from_ap_id(ap_id),
{:ok, user} <- update_user(user, data) do
{:ok, _pid} = Task.start(fn -> ActivityPub.pinned_fetch_task(user) end)
ActivityPub.enqueue_pin_fetches(user)
TransmogrifierWorker.enqueue("user_upgrade", %{"user_id" => user.id})
{:ok, user}
else


@ -1,35 +0,0 @@
defmodule Pleroma.Web.AdminControl.AdminControlController do
use Pleroma.Web, :controller
plug(:put_root_layout, {Pleroma.Web.AdminControl.AdminControlView, :layout})
plug(:put_layout, false)
defp label_for(%{label: label}), do: label
defp label_for(_), do: "Unknown"
defp descriptions, do: Pleroma.Docs.JSON.compiled_descriptions()
def config_headings do
descriptions()
|> Enum.map(&label_for(&1))
|> Enum.sort()
end
def config_values(%{"heading" => heading}) do
IO.inspect(heading)
possible_values =
descriptions()
|> Enum.filter(fn section -> label_for(section) == heading end)
possible_values
end
def config_values(_), do: []
def index(conn, params) do
IO.inspect(params)
render(conn, :index, config_values: config_values(params), config_headings: config_headings())
end
end


@ -1,108 +0,0 @@
defmodule Pleroma.Web.AdminControl.AdminControlView do
use Pleroma.Web, :html
require Logger
embed_templates "admin_control_html/*"
defp atomize(":" <> key), do: String.to_existing_atom(key)
defp value_of(%{config_value: %{key: child_key}, parent_key: parent_key}) when is_binary(parent_key) do
parent_atom = atomize(parent_key)
child_atom = atomize(child_key)
Pleroma.Config.get([parent_atom, child_atom])
|> to_string()
end
attr :config_value, :map, required: true
attr :parent_key, :string, required: false
def config_value(%{config_value: %{type: :group} = value} = assigns) do
~H"""
<div class="config-group">
<h3 class="config-group-title text-2xl"><%= @config_value.label %></h3>
<p class="ml-2"><%= @config_value.description %></p>
<div class="ml-3">
<%= for child_value <- @config_value.children do %>
<.config_value config_value={child_value} parent_key={@config_value.key} />
<% end %>
</div>
</div>
"""
end
def config_value(%{config_value: %{type: :integer, key: key} = value} = assigns) do
value = value_of(assigns)
assigns = assign(assigns, value: value, key: key)
~H"""
<div>
<label for={@key} class="block text-sm font-medium leading-6 text-white"><%= @config_value.label %></label>
<div class="mt-2">
<input type="number" name={@key} id={@key} value={@value} class="block w-full rounded-md border-0 bg-white/5 py-1.5 text-white shadow-sm ring-1 ring-inset ring-white/10 focus:ring-2 focus:ring-inset focus:ring-indigo-500 sm:text-sm sm:leading-6">
</div>
<p class="mt-2 text-sm text-gray-500"><%= @config_value.description %></p>
</div>
"""
end
def config_value(%{config_value: %{type: :string, key: key} = value} = assigns) do
value = value_of(assigns)
assigns = assign(assigns, value: value, key: key)
~H"""
<div>
<label for={@key} class="block text-sm font-medium leading-6 text-white"><%= @config_value.label %></label>
<div class="mt-2">
<input type="text" name={@key} id={@key} value={@value} class="block w-full rounded-md border-0 bg-white/5 py-1.5 text-white shadow-sm ring-1 ring-inset ring-white/10 focus:ring-2 focus:ring-inset focus:ring-indigo-500 sm:text-sm sm:leading-6">
</div>
<p class="mt-2 text-sm text-gray-500"><%= @config_value.description %></p>
</div>
"""
end
def config_value(%{config_value: %{type: :boolean, key: key} = value} = assigns) do
value = value_of(assigns) == "true"
assigns = assign(assigns, value: value, key: key)
~H"""
<div>
<label for={@key} class="block text-sm font-medium leading-6 text-white"><%= @config_value.label %></label>
<div class="mt-2">
<p class="mt-2 text-sm text-gray-500"><input type="checkbox" name={@key} id={@key} checked={@value} class="rounded-md px-2"> <%= @config_value.description %></p>
</div>
</div>
"""
end
def config_value(%{config_value: %{type: {:list, :string}} = value} = assigns) do
value = value_of(assigns)
assigns = assign(assigns, value: value)
~H"""
<div class="config-group">
<h3 class="config-group-title"><%= @config_value.label %></h3>
<span class="ml-2"><%= @config_value.description %></span>
<%= @value %>
</div>
"""
end
def config_value(assigns) do
Logger.info("Cannot render config!")
IO.inspect(assigns)
~H"""
Cannot render
"""
end
attr :config_values, :list, required: true
def config_values(%{config_values: config_values} = assigns) do
~H"""
<div class="config-values text-white">
<%= for value <- @config_values do %>
<.config_value config_value={value} />
<% end %>
</div>
"""
end
end


@ -1,27 +0,0 @@
<ul role="list" class="-mx-2 space-y-1">
<%= for heading <- @config_headings do %>
<li>
<!-- Current: "bg-gray-800 text-white", Default: "text-gray-400 hover:text-white hover:bg-gray-800" -->
<a
href={"?heading="<>heading}
class="text-gray-400 hover:text-white hover:bg-gray-800 group flex gap-x-3 rounded-md p-2 text-sm leading-6 font-semibold"
>
<svg
class="h-6 w-6 shrink-0"
fill="none"
viewBox="0 0 24 24"
stroke-width="1.5"
stroke="currentColor"
aria-hidden="true"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
d="M2.25 12.75V12A2.25 2.25 0 014.5 9.75h15A2.25 2.25 0 0121.75 12v.75m-8.69-6.44l-2.12-2.12a1.5 1.5 0 00-1.061-.44H4.5A2.25 2.25 0 002.25 6v12a2.25 2.25 0 002.25 2.25h15A2.25 2.25 0 0021.75 18V9a2.25 2.25 0 00-2.25-2.25h-5.379a1.5 1.5 0 01-1.06-.44z"
/>
</svg>
<%= heading %>
</a>
</li>
<% end %>
</ul>


@ -1,27 +0,0 @@
<div>
<!-- Static sidebar for desktop -->
<div class="fixed inset-y-0 z-50 flex w-72 flex-col">
<!-- Sidebar component, swap this element with another sidebar if you like -->
<div class="flex grow flex-col gap-y-5 overflow-y-auto bg-black/10 px-6 ring-1 ring-white/5">
<div class="flex h-16 shrink-0 items-center text-white">
Akkoma
</div>
<nav class="flex flex-1 flex-col">
<ul role="list" class="flex flex-1 flex-col gap-y-7">
<li>
<.config_heading_menu config_headings={@config_headings} />
</li>
</ul>
</nav>
</div>
</div>
<div class="pl-72">
<main>
<header class="flex items-center justify-between border-b border-white/5 px-4 py-4 sm:px-6 sm:py-6 lg:px-8">
<h1 class="text-base font-semibold leading-7 text-white">Things</h1>
</header>
<.config_values config_values={@config_values} />
</main>
</div>
</div>


@ -1,17 +0,0 @@
<!DOCTYPE html>
<html lang="en" class="[scrollbar-gutter:stable] h-full bg-gray-900">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="csrf-token" content={get_csrf_token()} />
<.live_title suffix=" · Akkoma Admin Control">
<%= assigns[:page_title] || "A" %>
</.live_title>
<link phx-track-static rel="stylesheet" href={~p"/assets/app.css"} />
<script defer phx-track-static type="text/javascript" src={~p"/assets/app.js"}>
</script>
</head>
<body class="antialiased h-full">
<%= @inner_content %>
</body>
</html>


@ -41,7 +41,7 @@ defmodule Pleroma.Web.CommonAPI.ActivityDraft do
preview?: false,
changes: %{}
def new(user, params) do
defp new(user, params) do
%__MODULE__{user: user}
|> put_params(params)
end
@ -92,9 +92,14 @@ defmodule Pleroma.Web.CommonAPI.ActivityDraft do
end
end
defp attachments(%{params: params} = draft) do
attachments = Utils.attachments_from_ids(params)
%__MODULE__{draft | attachments: attachments}
defp attachments(%{params: params, user: user} = draft) do
case Utils.attachments_from_ids(user, params) do
attachments when is_list(attachments) ->
%__MODULE__{draft | attachments: attachments}
{:error, reason} ->
add_error(draft, reason)
end
end
defp in_reply_to(%{params: %{in_reply_to_status_id: ""}} = draft), do: draft


@ -22,43 +22,31 @@ defmodule Pleroma.Web.CommonAPI.Utils do
require Logger
require Pleroma.Constants
def attachments_from_ids(%{media_ids: ids, descriptions: desc}) do
attachments_from_ids_descs(ids, desc)
def attachments_from_ids(user, %{media_ids: ids}) do
attachments_from_ids(user, ids, [])
end
def attachments_from_ids(%{media_ids: ids}) do
attachments_from_ids_no_descs(ids)
def attachments_from_ids(_, _), do: []
defp attachments_from_ids(_user, [], acc), do: Enum.reverse(acc)
defp attachments_from_ids(user, [media_id | ids], acc) do
with {_, %Object{} = object} <- {:get, get_attachment(media_id)},
:ok <- Object.authorize_access(object, user) do
attachments_from_ids(user, ids, [object.data | acc])
else
{:get, _} -> attachments_from_ids(user, ids, acc)
{:error, reason} -> {:error, reason}
end
end
def attachments_from_ids(_), do: []
def attachments_from_ids_no_descs([]), do: []
def attachments_from_ids_no_descs(ids) do
Enum.map(ids, fn media_id ->
case get_attachment(media_id) do
%Object{data: data} -> data
_ -> nil
end
end)
|> Enum.reject(&is_nil/1)
end
def attachments_from_ids_descs([], _), do: []
def attachments_from_ids_descs(ids, descs_str) do
{_, descs} = Jason.decode(descs_str)
Enum.map(ids, fn media_id ->
with %Object{data: data} <- get_attachment(media_id) do
Map.put(data, "name", descs[media_id])
end
end)
|> Enum.reject(&is_nil/1)
end
defp get_attachment(media_id) do
Repo.get(Object, media_id)
def get_attachment(media_id) do
with %Object{} = object <- Repo.get(Object, media_id),
true <- object.data["type"] in Pleroma.Constants.attachment_types() do
object
else
_ -> nil
end
end
@spec get_to_and_cc(ActivityDraft.t()) :: {list(String.t()), list(String.t())}
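A hedged sketch of the new ownership-aware flow (the :forbidden reason is an assumption about what Object.authorize_access returns on failure; the media id is hypothetical):

case Pleroma.Web.CommonAPI.Utils.attachments_from_ids(user, %{media_ids: ["1234"]}) do
  # each entry is the attachment object's data map
  attachments when is_list(attachments) ->
    attachments

  # the user tried to attach media owned by someone else
  {:error, :forbidden} ->
    []
end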


@ -1,676 +0,0 @@
defmodule Pleroma.Web.CoreComponents do
@moduledoc """
Provides core UI components.
At first glance, this module may seem daunting, but its goal is to provide
core building blocks for your application, such as modals, tables, and
forms. The components consist mostly of markup and are well-documented
with doc strings and declarative assigns. You may customize and style
them in any way you want, based on your application growth and needs.
The default components use Tailwind CSS, a utility-first CSS framework.
See the [Tailwind CSS documentation](https://tailwindcss.com) to learn
how to customize them or feel free to swap in another framework altogether.
Icons are provided by [heroicons](https://heroicons.com). See `icon/1` for usage.
"""
use Phoenix.Component
alias Phoenix.LiveView.JS
import Pleroma.Web.Gettext
@doc """
Renders a modal.
## Examples
<.modal id="confirm-modal">
This is a modal.
</.modal>
JS commands may be passed to the `:on_cancel` to configure
the closing/cancel event, for example:
<.modal id="confirm" on_cancel={JS.navigate(~p"/posts")}>
This is another modal.
</.modal>
"""
attr :id, :string, required: true
attr :show, :boolean, default: false
attr :on_cancel, JS, default: %JS{}
slot :inner_block, required: true
def modal(assigns) do
~H"""
<div
id={@id}
phx-mounted={@show && show_modal(@id)}
phx-remove={hide_modal(@id)}
data-cancel={JS.exec(@on_cancel, "phx-remove")}
class="relative z-50 hidden"
>
<div id={"#{@id}-bg"} class="bg-zinc-50/90 fixed inset-0 transition-opacity" aria-hidden="true" />
<div
class="fixed inset-0 overflow-y-auto"
aria-labelledby={"#{@id}-title"}
aria-describedby={"#{@id}-description"}
role="dialog"
aria-modal="true"
tabindex="0"
>
<div class="flex min-h-full items-center justify-center">
<div class="w-full max-w-3xl p-4 sm:p-6 lg:py-8">
<.focus_wrap
id={"#{@id}-container"}
phx-window-keydown={JS.exec("data-cancel", to: "##{@id}")}
phx-key="escape"
phx-click-away={JS.exec("data-cancel", to: "##{@id}")}
class="shadow-zinc-700/10 ring-zinc-700/10 relative hidden rounded-2xl bg-white p-14 shadow-lg ring-1 transition"
>
<div class="absolute top-6 right-5">
<button
phx-click={JS.exec("data-cancel", to: "##{@id}")}
type="button"
class="-m-3 flex-none p-3 opacity-20 hover:opacity-40"
aria-label={gettext("close")}
>
<.icon name="hero-x-mark-solid" class="h-5 w-5" />
</button>
</div>
<div id={"#{@id}-content"}>
<%= render_slot(@inner_block) %>
</div>
</.focus_wrap>
</div>
</div>
</div>
</div>
"""
end
@doc """
Renders flash notices.
## Examples
<.flash kind={:info} flash={@flash} />
<.flash kind={:info} phx-mounted={show("#flash")}>Welcome Back!</.flash>
"""
attr :id, :string, doc: "the optional id of flash container"
attr :flash, :map, default: %{}, doc: "the map of flash messages to display"
attr :title, :string, default: nil
attr :kind, :atom, values: [:info, :error], doc: "used for styling and flash lookup"
attr :rest, :global, doc: "the arbitrary HTML attributes to add to the flash container"
slot :inner_block, doc: "the optional inner block that renders the flash message"
def flash(assigns) do
assigns = assign_new(assigns, :id, fn -> "flash-#{assigns.kind}" end)
~H"""
<div
:if={msg = render_slot(@inner_block) || Phoenix.Flash.get(@flash, @kind)}
id={@id}
phx-click={JS.push("lv:clear-flash", value: %{key: @kind}) |> hide("##{@id}")}
role="alert"
class={[
"fixed top-2 right-2 mr-2 w-80 sm:w-96 z-50 rounded-lg p-3 ring-1",
@kind == :info && "bg-emerald-50 text-emerald-800 ring-emerald-500 fill-cyan-900",
@kind == :error && "bg-rose-50 text-rose-900 shadow-md ring-rose-500 fill-rose-900"
]}
{@rest}
>
<p :if={@title} class="flex items-center gap-1.5 text-sm font-semibold leading-6">
<.icon :if={@kind == :info} name="hero-information-circle-mini" class="h-4 w-4" />
<.icon :if={@kind == :error} name="hero-exclamation-circle-mini" class="h-4 w-4" />
<%= @title %>
</p>
<p class="mt-2 text-sm leading-5"><%= msg %></p>
<button type="button" class="group absolute top-1 right-1 p-2" aria-label={gettext("close")}>
<.icon name="hero-x-mark-solid" class="h-5 w-5 opacity-40 group-hover:opacity-70" />
</button>
</div>
"""
end
@doc """
Shows the flash group with standard titles and content.
## Examples
<.flash_group flash={@flash} />
"""
attr :flash, :map, required: true, doc: "the map of flash messages"
attr :id, :string, default: "flash-group", doc: "the optional id of flash container"
def flash_group(assigns) do
~H"""
<div id={@id}>
<.flash kind={:info} title={gettext("Success!")} flash={@flash} />
<.flash kind={:error} title={gettext("Error!")} flash={@flash} />
<.flash
id="client-error"
kind={:error}
title={gettext("We can't find the internet")}
phx-disconnected={show(".phx-client-error #client-error")}
phx-connected={hide("#client-error")}
hidden
>
<%= gettext("Attempting to reconnect") %>
<.icon name="hero-arrow-path" class="ml-1 h-3 w-3 animate-spin" />
</.flash>
<.flash
id="server-error"
kind={:error}
title={gettext("Something went wrong!")}
phx-disconnected={show(".phx-server-error #server-error")}
phx-connected={hide("#server-error")}
hidden
>
<%= gettext("Hang in there while we get back on track") %>
<.icon name="hero-arrow-path" class="ml-1 h-3 w-3 animate-spin" />
</.flash>
</div>
"""
end
@doc """
Renders a simple form.
## Examples
<.simple_form for={@form} phx-change="validate" phx-submit="save">
<.input field={@form[:email]} label="Email"/>
<.input field={@form[:username]} label="Username" />
<:actions>
<.button>Save</.button>
</:actions>
</.simple_form>
"""
attr :for, :any, required: true, doc: "the datastructure for the form"
attr :as, :any, default: nil, doc: "the server side parameter to collect all input under"
attr :rest, :global,
include: ~w(autocomplete name rel action enctype method novalidate target multipart),
doc: "the arbitrary HTML attributes to apply to the form tag"
slot :inner_block, required: true
slot :actions, doc: "the slot for form actions, such as a submit button"
def simple_form(assigns) do
~H"""
<.form :let={f} for={@for} as={@as} {@rest}>
<div class="mt-10 space-y-8 bg-white">
<%= render_slot(@inner_block, f) %>
<div :for={action <- @actions} class="mt-2 flex items-center justify-between gap-6">
<%= render_slot(action, f) %>
</div>
</div>
</.form>
"""
end
@doc """
Renders a button.
## Examples
<.button>Send!</.button>
<.button phx-click="go" class="ml-2">Send!</.button>
"""
attr :type, :string, default: nil
attr :class, :string, default: nil
attr :rest, :global, include: ~w(disabled form name value)
slot :inner_block, required: true
def button(assigns) do
~H"""
<button
type={@type}
class={[
"phx-submit-loading:opacity-75 rounded-lg bg-zinc-900 hover:bg-zinc-700 py-2 px-3",
"text-sm font-semibold leading-6 text-white active:text-white/80",
@class
]}
{@rest}
>
<%= render_slot(@inner_block) %>
</button>
"""
end
@doc """
Renders an input with label and error messages.
A `Phoenix.HTML.FormField` may be passed as argument,
which is used to retrieve the input name, id, and values.
Otherwise all attributes may be passed explicitly.
## Types
This function accepts all HTML input types, considering that:
* You may also set `type="select"` to render a `<select>` tag
* `type="checkbox"` is used exclusively to render boolean values
* For live file uploads, see `Phoenix.Component.live_file_input/1`
See https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input
for more information. Unsupported types, such as hidden and radio,
are best written directly in your templates.
## Examples
<.input field={@form[:email]} type="email" />
<.input name="my-input" errors={["oh no!"]} />
"""
attr :id, :any, default: nil
attr :name, :any
attr :label, :string, default: nil
attr :value, :any
attr :type, :string,
default: "text",
values: ~w(checkbox color date datetime-local email file month number password
range search select tel text textarea time url week)
attr :field, Phoenix.HTML.FormField,
doc: "a form field struct retrieved from the form, for example: @form[:email]"
attr :errors, :list, default: []
attr :checked, :boolean, doc: "the checked flag for checkbox inputs"
attr :prompt, :string, default: nil, doc: "the prompt for select inputs"
attr :options, :list, doc: "the options to pass to Phoenix.HTML.Form.options_for_select/2"
attr :multiple, :boolean, default: false, doc: "the multiple flag for select inputs"
attr :rest, :global,
include: ~w(accept autocomplete capture cols disabled form list max maxlength min minlength
multiple pattern placeholder readonly required rows size step)
slot :inner_block
def input(%{field: %Phoenix.HTML.FormField{} = field} = assigns) do
assigns
|> assign(field: nil, id: assigns.id || field.id)
|> assign(:errors, Enum.map(field.errors, &translate_error(&1)))
|> assign_new(:name, fn -> if assigns.multiple, do: field.name <> "[]", else: field.name end)
|> assign_new(:value, fn -> field.value end)
|> input()
end
def input(%{type: "checkbox"} = assigns) do
assigns =
assign_new(assigns, :checked, fn ->
Phoenix.HTML.Form.normalize_value("checkbox", assigns[:value])
end)
~H"""
<div phx-feedback-for={@name}>
<label class="flex items-center gap-4 text-sm leading-6 text-zinc-600">
<input type="hidden" name={@name} value="false" />
<input
type="checkbox"
id={@id}
name={@name}
value="true"
checked={@checked}
class="rounded border-zinc-300 text-zinc-900 focus:ring-0"
{@rest}
/>
<%= @label %>
</label>
<.error :for={msg <- @errors}><%= msg %></.error>
</div>
"""
end
def input(%{type: "select"} = assigns) do
~H"""
<div phx-feedback-for={@name}>
<.label for={@id}><%= @label %></.label>
<select
id={@id}
name={@name}
class="mt-2 block w-full rounded-md border border-gray-300 bg-white shadow-sm focus:border-zinc-400 focus:ring-0 sm:text-sm"
multiple={@multiple}
{@rest}
>
<option :if={@prompt} value=""><%= @prompt %></option>
<%= Phoenix.HTML.Form.options_for_select(@options, @value) %>
</select>
<.error :for={msg <- @errors}><%= msg %></.error>
</div>
"""
end
def input(%{type: "textarea"} = assigns) do
~H"""
<div phx-feedback-for={@name}>
<.label for={@id}><%= @label %></.label>
<textarea
id={@id}
name={@name}
class={[
"mt-2 block w-full rounded-lg text-zinc-900 focus:ring-0 sm:text-sm sm:leading-6",
"min-h-[6rem] phx-no-feedback:border-zinc-300 phx-no-feedback:focus:border-zinc-400",
@errors == [] && "border-zinc-300 focus:border-zinc-400",
@errors != [] && "border-rose-400 focus:border-rose-400"
]}
{@rest}
><%= Phoenix.HTML.Form.normalize_value("textarea", @value) %></textarea>
<.error :for={msg <- @errors}><%= msg %></.error>
</div>
"""
end
# All other inputs text, datetime-local, url, password, etc. are handled here...
def input(assigns) do
~H"""
<div phx-feedback-for={@name}>
<.label for={@id}><%= @label %></.label>
<input
type={@type}
name={@name}
id={@id}
value={Phoenix.HTML.Form.normalize_value(@type, @value)}
class={[
"mt-2 block w-full rounded-lg text-zinc-900 focus:ring-0 sm:text-sm sm:leading-6",
"phx-no-feedback:border-zinc-300 phx-no-feedback:focus:border-zinc-400",
@errors == [] && "border-zinc-300 focus:border-zinc-400",
@errors != [] && "border-rose-400 focus:border-rose-400"
]}
{@rest}
/>
<.error :for={msg <- @errors}><%= msg %></.error>
</div>
"""
end
@doc """
Renders a label.
"""
attr :for, :string, default: nil
slot :inner_block, required: true
def label(assigns) do
~H"""
<label for={@for} class="block text-sm font-semibold leading-6 text-zinc-800">
<%= render_slot(@inner_block) %>
</label>
"""
end
@doc """
Generates a generic error message.
"""
slot :inner_block, required: true
def error(assigns) do
~H"""
<p class="mt-3 flex gap-3 text-sm leading-6 text-rose-600 phx-no-feedback:hidden">
<.icon name="hero-exclamation-circle-mini" class="mt-0.5 h-5 w-5 flex-none" />
<%= render_slot(@inner_block) %>
</p>
"""
end
@doc """
Renders a header with title.
"""
attr :class, :string, default: nil
slot :inner_block, required: true
slot :subtitle
slot :actions
def header(assigns) do
~H"""
<header class={[@actions != [] && "flex items-center justify-between gap-6", @class]}>
<div>
<h1 class="text-lg font-semibold leading-8 text-zinc-800">
<%= render_slot(@inner_block) %>
</h1>
<p :if={@subtitle != []} class="mt-2 text-sm leading-6 text-zinc-600">
<%= render_slot(@subtitle) %>
</p>
</div>
<div class="flex-none"><%= render_slot(@actions) %></div>
</header>
"""
end
@doc ~S"""
Renders a table with generic styling.
## Examples
<.table id="users" rows={@users}>
<:col :let={user} label="id"><%= user.id %></:col>
<:col :let={user} label="username"><%= user.username %></:col>
</.table>
"""
attr :id, :string, required: true
attr :rows, :list, required: true
attr :row_id, :any, default: nil, doc: "the function for generating the row id"
attr :row_click, :any, default: nil, doc: "the function for handling phx-click on each row"
attr :row_item, :any,
default: &Function.identity/1,
doc: "the function for mapping each row before calling the :col and :action slots"
slot :col, required: true do
attr :label, :string
end
slot :action, doc: "the slot for showing user actions in the last table column"
def table(assigns) do
assigns =
with %{rows: %Phoenix.LiveView.LiveStream{}} <- assigns do
assign(assigns, row_id: assigns.row_id || fn {id, _item} -> id end)
end
~H"""
<div class="overflow-y-auto px-4 sm:overflow-visible sm:px-0">
<table class="w-[40rem] mt-11 sm:w-full">
<thead class="text-sm text-left leading-6 text-zinc-500">
<tr>
<th :for={col <- @col} class="p-0 pb-4 pr-6 font-normal"><%= col[:label] %></th>
<th :if={@action != []} class="relative p-0 pb-4">
<span class="sr-only"><%= gettext("Actions") %></span>
</th>
</tr>
</thead>
<tbody
id={@id}
phx-update={match?(%Phoenix.LiveView.LiveStream{}, @rows) && "stream"}
class="relative divide-y divide-zinc-100 border-t border-zinc-200 text-sm leading-6 text-zinc-700"
>
<tr :for={row <- @rows} id={@row_id && @row_id.(row)} class="group hover:bg-zinc-50">
<td
:for={{col, i} <- Enum.with_index(@col)}
phx-click={@row_click && @row_click.(row)}
class={["relative p-0", @row_click && "hover:cursor-pointer"]}
>
<div class="block py-4 pr-6">
<span class="absolute -inset-y-px right-0 -left-4 group-hover:bg-zinc-50 sm:rounded-l-xl" />
<span class={["relative", i == 0 && "font-semibold text-zinc-900"]}>
<%= render_slot(col, @row_item.(row)) %>
</span>
</div>
</td>
<td :if={@action != []} class="relative w-14 p-0">
<div class="relative whitespace-nowrap py-4 text-right text-sm font-medium">
<span class="absolute -inset-y-px -right-4 left-0 group-hover:bg-zinc-50 sm:rounded-r-xl" />
<span
:for={action <- @action}
class="relative ml-4 font-semibold leading-6 text-zinc-900 hover:text-zinc-700"
>
<%= render_slot(action, @row_item.(row)) %>
</span>
</div>
</td>
</tr>
</tbody>
</table>
</div>
"""
end
@doc """
Renders a data list.
## Examples
<.list>
<:item title="Title"><%= @post.title %></:item>
<:item title="Views"><%= @post.views %></:item>
</.list>
"""
slot :item, required: true do
attr :title, :string, required: true
end
def list(assigns) do
~H"""
<div class="mt-14">
<dl class="-my-4 divide-y divide-zinc-100">
<div :for={item <- @item} class="flex gap-4 py-4 text-sm leading-6 sm:gap-8">
<dt class="w-1/4 flex-none text-zinc-500"><%= item.title %></dt>
<dd class="text-zinc-700"><%= render_slot(item) %></dd>
</div>
</dl>
</div>
"""
end
@doc """
Renders a back navigation link.
## Examples
<.back navigate={~p"/posts"}>Back to posts</.back>
"""
attr :navigate, :any, required: true
slot :inner_block, required: true
def back(assigns) do
~H"""
<div class="mt-16">
<.link
navigate={@navigate}
class="text-sm font-semibold leading-6 text-zinc-900 hover:text-zinc-700"
>
<.icon name="hero-arrow-left-solid" class="h-3 w-3" />
<%= render_slot(@inner_block) %>
</.link>
</div>
"""
end
@doc """
Renders a [Heroicon](https://heroicons.com).
Heroicons come in three styles outline, solid, and mini.
By default, the outline style is used, but solid and mini may
be applied by using the `-solid` and `-mini` suffix.
You can customize the size and colors of the icons by setting
width, height, and background color classes.
Icons are extracted from the `deps/heroicons` directory and bundled within
your compiled app.css by the plugin in your `assets/tailwind.config.js`.
## Examples
<.icon name="hero-x-mark-solid" />
<.icon name="hero-arrow-path" class="ml-1 w-3 h-3 animate-spin" />
"""
attr :name, :string, required: true
attr :class, :string, default: nil
def icon(%{name: "hero-" <> _} = assigns) do
~H"""
<span class={[@name, @class]} />
"""
end
## JS Commands
def show(js \\ %JS{}, selector) do
JS.show(js,
to: selector,
transition:
{"transition-all transform ease-out duration-300",
"opacity-0 translate-y-4 sm:translate-y-0 sm:scale-95",
"opacity-100 translate-y-0 sm:scale-100"}
)
end
def hide(js \\ %JS{}, selector) do
JS.hide(js,
to: selector,
time: 200,
transition:
{"transition-all transform ease-in duration-200",
"opacity-100 translate-y-0 sm:scale-100",
"opacity-0 translate-y-4 sm:translate-y-0 sm:scale-95"}
)
end
def show_modal(js \\ %JS{}, id) when is_binary(id) do
js
|> JS.show(to: "##{id}")
|> JS.show(
to: "##{id}-bg",
transition: {"transition-all transform ease-out duration-300", "opacity-0", "opacity-100"}
)
|> show("##{id}-container")
|> JS.add_class("overflow-hidden", to: "body")
|> JS.focus_first(to: "##{id}-content")
end
def hide_modal(js \\ %JS{}, id) do
js
|> JS.hide(
to: "##{id}-bg",
transition: {"transition-all transform ease-in duration-200", "opacity-100", "opacity-0"}
)
|> hide("##{id}-container")
|> JS.hide(to: "##{id}", transition: {"block", "block", "hidden"})
|> JS.remove_class("overflow-hidden", to: "body")
|> JS.pop_focus()
end
@doc """
Translates an error message using gettext.
"""
def translate_error({msg, opts}) do
# When using gettext, we typically pass the strings we want
# to translate as a static argument:
#
# # Translate the number of files with plural rules
# dngettext("errors", "1 file", "%{count} files", count)
#
# However the error messages in our forms and APIs are generated
# dynamically, so we need to translate them by calling Gettext
# with our gettext backend as first argument. Translations are
# available in the errors.po file (as we use the "errors" domain).
if count = opts[:count] do
Gettext.dngettext(AWeb.Gettext, "errors", msg, msg, count, opts)
else
Gettext.dgettext(AWeb.Gettext, "errors", msg, opts)
end
end
@doc """
Translates the errors for a field from a keyword list of errors.
"""
def translate_errors(errors, field) when is_list(errors) do
for {^field, {msg, opts}} <- errors, do: translate_error({msg, opts})
end
end
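A quick usage sketch for the components above (the @users assign and its fields are hypothetical, not part of this diff):

<.table id="users" rows={@users}>
  <:col :let={user} label="Name"><%= user.name %></:col>
  <:col :let={user} label="Email"><%= user.email %></:col>
  <:action :let={user}>
    <.link navigate={"/users/#{user.id}"}>Show</.link>
  </:action>
</.table>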

View file

@@ -8,6 +8,7 @@ defmodule Pleroma.Web.MastodonAPI.MediaController do
alias Pleroma.Object
alias Pleroma.User
alias Pleroma.Web.ActivityPub.ActivityPub
alias Pleroma.Web.CommonAPI.Utils
alias Pleroma.Web.Plugs.OAuthScopesPlug
action_fallback(Pleroma.Web.MastodonAPI.FallbackController)
@@ -55,12 +56,15 @@ defmodule Pleroma.Web.MastodonAPI.MediaController do
@doc "PUT /api/v1/media/:id"
def update(%{assigns: %{user: user}, body_params: %{description: description}} = conn, %{id: id}) do
with %Object{} = object <- Object.get_by_id(id),
with {_, %Object{} = object} <- {:get, Utils.get_attachment(id)},
:ok <- Object.authorize_access(object, user),
{:ok, %Object{data: data}} <- Object.update_data(object, %{"name" => description}) do
attachment_data = Map.put(data, "id", object.id)
render(conn, "attachment.json", %{attachment: attachment_data})
else
{:get, _} -> {:error, :not_found}
e -> e
end
end
@@ -68,11 +72,14 @@ defmodule Pleroma.Web.MastodonAPI.MediaController do
@doc "GET /api/v1/media/:id"
def show(%{assigns: %{user: user}} = conn, %{id: id}) do
with %Object{data: data, id: object_id} = object <- Object.get_by_id(id),
with {_, %Object{data: data, id: object_id} = object} <- {:get, Utils.get_attachment(id)},
:ok <- Object.authorize_access(object, user) do
attachment_data = Map.put(data, "id", object_id)
render(conn, "attachment.json", %{attachment: attachment_data})
else
{:get, _} -> {:error, :not_found}
e -> e
end
end
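The change above swaps a bare match for a tagged tuple so a failed lookup is distinguishable in the else clauses; the idiom in isolation (fetch_thing/1 and authorize/1 are hypothetical):

def show(id) do
  with {_, %{} = thing} <- {:get, fetch_thing(id)},
       :ok <- authorize(thing) do
    {:ok, thing}
  else
    # a nil lookup lands here and maps to a specific error
    {:get, _} -> {:error, :not_found}
    e -> e
  end
end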

View file

@@ -87,7 +87,7 @@ defmodule Pleroma.Web.MastodonAPI.StatusController do
%{scopes: ["write:bookmarks"]} when action in [:bookmark, :unbookmark]
)
@rate_limited_status_actions ~w(reblog unreblog favourite unfavourite create delete)a
@rate_limited_status_actions ~w(reblog unreblog favourite unfavourite create delete update)a
plug(
RateLimiter,

View file

@@ -22,17 +22,13 @@ defmodule Pleroma.Web.MastodonAPI.StatusView do
alias Pleroma.Web.MediaProxy
alias Pleroma.Web.PleromaAPI.EmojiReactionController
require Logger
alias Pleroma.Web.RichMedia.Card
import Pleroma.Web.ActivityPub.Visibility, only: [get_visibility: 1, visible_for_user?: 2]
# This is a naive way to do this, just spawning a process per activity
# to fetch the preview. However, it should be fine considering
# pagination is restricted to 40 activities at a time
defp fetch_rich_media_for_activities(activities) do
Enum.each(activities, fn activity ->
spawn(fn ->
Pleroma.Web.RichMedia.Helpers.fetch_data_for_activity(activity)
end)
Card.get_by_activity(activity)
end)
end
@@ -93,9 +89,7 @@ defmodule Pleroma.Web.MastodonAPI.StatusView do
# To do: check AdminAPIControllerTest on the reasons behind nil activities in the list
activities = Enum.filter(opts.activities, & &1)
# Start fetching rich media before doing anything else, so that later calls to get the cards
# only block for timeout in the worst case, as opposed to
# length(activities_with_links) * timeout
# Start prefetching rich media before doing anything else
fetch_rich_media_for_activities(activities)
replied_to_activities = get_replied_to_activities(activities)
@@ -309,6 +303,12 @@ defmodule Pleroma.Web.MastodonAPI.StatusView do
"mastoapi:content:#{chrono_order}"
)
card =
case Card.get_by_activity(activity) do
%Card{} = result -> render("card.json", result)
_ -> nil
end
content_plaintext =
content
|> Activity.HTML.get_cached_stripped_html_for_activity(
@@ -318,8 +318,6 @@ defmodule Pleroma.Web.MastodonAPI.StatusView do
summary = object.data["summary"] || ""
card = render("card.json", Pleroma.Web.RichMedia.Helpers.fetch_data_for_activity(activity))
url =
if user.local do
url(~p[/notice/#{activity}])
@@ -528,37 +526,30 @@ defmodule Pleroma.Web.MastodonAPI.StatusView do
}
end
def render("card.json", %{rich_media: rich_media, page_url: page_url}) do
page_url_data = URI.parse(page_url)
page_url_data =
if is_binary(rich_media["url"]) do
URI.merge(page_url_data, URI.parse(rich_media["url"]))
else
page_url_data
end
def render("card.json", %Card{fields: rich_media}) do
page_url_data = URI.parse(rich_media["url"])
page_url = page_url_data |> to_string
image_url_data =
if is_binary(rich_media["image"]) do
URI.parse(rich_media["image"])
else
nil
end
image_url = build_image_url(image_url_data, page_url_data)
image_url = proxied_url(rich_media["image"], page_url_data)
audio_url = proxied_url(rich_media["audio"], page_url_data)
video_url = proxied_url(rich_media["video"], page_url_data)
%{
type: "link",
provider_name: page_url_data.host,
provider_url: page_url_data.scheme <> "://" <> page_url_data.host,
url: page_url,
image: image_url |> MediaProxy.url(),
image: image_url,
image_description: rich_media["image:alt"] || "",
title: rich_media["title"] || "",
description: rich_media["description"] || "",
pleroma: %{
opengraph: rich_media
opengraph:
rich_media
|> Maps.put_if_present("image", image_url)
|> Maps.put_if_present("audio", audio_url)
|> Maps.put_if_present("video", video_url)
}
}
end
@@ -636,6 +627,14 @@ defmodule Pleroma.Web.MastodonAPI.StatusView do
}
end
defp proxied_url(url, page_url_data) do
if is_binary(url) do
build_image_url(URI.parse(url), page_url_data) |> MediaProxy.url()
else
nil
end
end
def get_reply_to(activity, %{replied_to_activities: replied_to_activities}) do
object = Object.normalize(activity, fetch: false)
@@ -740,19 +739,7 @@ defmodule Pleroma.Web.MastodonAPI.StatusView do
defp build_application(_), do: nil
# Workaround for Elixir issue #10771
# Avoid applying URI.merge unless necessary
# TODO: revert to always attempting URI.merge(image_url_data, page_url_data)
# when Elixir 1.12 is the minimum supported version
@spec build_image_url(struct() | nil, struct()) :: String.t() | nil
defp build_image_url(
%URI{scheme: image_scheme, host: image_host} = image_url_data,
%URI{} = _page_url_data
)
when not is_nil(image_scheme) and not is_nil(image_host) do
image_url_data |> to_string
end
defp build_image_url(%URI{} = image_url_data, %URI{} = page_url_data) do
URI.merge(page_url_data, image_url_data) |> to_string
end
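Context for the URL handling above: page metadata may carry relative media URLs, so they are resolved against the page URL before proxying, e.g. (illustrative values):

page = URI.parse("https://example.com/articles/1")
URI.merge(page, URI.parse("cover.png")) |> to_string()
#=> "https://example.com/articles/cover.png"
URI.merge(page, URI.parse("https://cdn.example.net/c.png")) |> to_string()
#=> "https://cdn.example.net/c.png"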

View file

@@ -18,6 +18,8 @@ defmodule Pleroma.Web.MastodonAPI.WebsocketHandler do
@timeout :timer.seconds(60)
# Hibernate every X messages
@hibernate_every 100
# Tune garbage collection for long-lived websocket processes
@fullsweep_after 20
def init(%{qs: qs} = req, state) do
with params <- Enum.into(:cow_qs.parse_qs(qs), %{}),
@@ -59,6 +61,10 @@ defmodule Pleroma.Web.MastodonAPI.WebsocketHandler do
"#{__MODULE__} accepted websocket connection for user #{(state.user || %{id: "anonymous"}).id}, topic #{state.topic}"
)
# process is long-lived and can sometimes accumulate stale data in such a way it's
# not freed by young garbage cycles, thus make full collection sweeps more frequent
:erlang.process_flag(:fullsweep_after, @fullsweep_after)
Streamer.add_socket(state.topic, state.oauth_token)
{:ok, %{state | timer: timer()}}
end
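The flag used above is plain Erlang and works in any long-lived process; a value of 20 forces a full sweep after 20 generational collections instead of the default 65535, and setting it per process avoids the VM-wide cost of lowering the default globally:

# inside the process that should be collected more aggressively
:erlang.process_flag(:fullsweep_after, 20)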

View file

@@ -52,6 +52,14 @@ defmodule Pleroma.Web.PleromaAPI.EmojiReactionController do
end)
end
defp filter_allowed_users_by_domain(ap_ids, %User{} = for_user) do
Enum.reject(ap_ids, fn ap_id ->
User.blocks_domain?(for_user, ap_id)
end)
end
defp filter_allowed_users_by_domain(ap_ids, nil), do: ap_ids
def filter_allowed_users(reactions, user, with_muted) do
exclude_ap_ids =
if is_nil(user) do
@@ -62,7 +70,10 @@ defmodule Pleroma.Web.PleromaAPI.EmojiReactionController do
end
filter_emoji = fn emoji, users, url ->
case filter_allowed_user_by_ap_id(users, exclude_ap_ids) do
users
|> filter_allowed_user_by_ap_id(exclude_ap_ids)
|> filter_allowed_users_by_domain(user)
|> case do
[] -> nil
users -> {emoji, users, url}
end

View file

@@ -44,6 +44,26 @@ defmodule Pleroma.Web.Plugs.HTTPSignaturePlug do
def route_aliases(_), do: []
def maybe_put_created_psudoheader(conn) do
case HTTPSignatures.signature_for_conn(conn) do
%{"created" => created} ->
put_req_header(conn, "(created)", created)
_ ->
conn
end
end
def maybe_put_expires_psudoheader(conn) do
case HTTPSignatures.signature_for_conn(conn) do
%{"expires" => expires} ->
put_req_header(conn, "(expires)", expires)
_ ->
conn
end
end
defp assign_valid_signature_on_route_aliases(conn, []), do: conn
defp assign_valid_signature_on_route_aliases(%{assigns: %{valid_signature: true}} = conn, _),
@@ -55,6 +75,8 @@ defmodule Pleroma.Web.Plugs.HTTPSignaturePlug do
conn =
conn
|> put_req_header("(request-target)", request_target)
|> maybe_put_created_psudoheader()
|> maybe_put_expires_psudoheader()
|> case do
%{assigns: %{digest: digest}} = conn -> put_req_header(conn, "digest", digest)
conn -> conn

View file

@@ -18,7 +18,6 @@ defmodule Pleroma.Web.Plugs.UserIsAdminPlug do
def call(conn, _) do
conn
|> IO.inspect()
|> render_error(:forbidden, "User is not an admin.")
|> halt()
end

View file

@@ -0,0 +1,98 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2022 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.RichMedia.Backfill do
use Pleroma.Workers.WorkerHelper,
queue: "rich_media_backfill",
unique: [period: 300, states: Oban.Job.states(), keys: [:op, :url_hash]]
alias Pleroma.Web.RichMedia.Card
alias Pleroma.Web.RichMedia.Parser
alias Pleroma.Web.RichMedia.Parser.TTL
alias Pleroma.Workers.RichMediaExpirationWorker
require Logger
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
def start(%{url: url} = args) when is_binary(url) do
url_hash = Card.url_to_hash(url)
args =
args
|> Map.put(:url_hash, url_hash)
__MODULE__.enqueue("rich_media_backfill", args)
end
def perform(%Oban.Job{args: %{"op" => "rich_media_backfill", "url" => url} = args})
when is_binary(url) do
run(args)
end
def run(%{"url" => url, "url_hash" => url_hash} = args) do
case Parser.parse(url) do
{:ok, fields} ->
{:ok, card} = Card.create(url, fields)
maybe_schedule_expiration(url, fields)
if Map.has_key?(args, "activity_id") do
stream_update(args)
end
warm_cache(url_hash, card)
:ok
{:error, {:invalid_metadata, fields}} ->
Logger.debug("Rich media incomplete or invalid metadata for #{url}: #{inspect(fields)}")
negative_cache(url_hash, :timer.minutes(30))
{:error, :body_too_large} ->
Logger.error("Rich media error for #{url}: :body_too_large")
negative_cache(url_hash, :timer.minutes(30))
{:error, {:content_type, type}} ->
Logger.debug("Rich media error for #{url}: :content_type is #{type}")
negative_cache(url_hash, :timer.minutes(30))
e ->
Logger.debug("Rich media error for #{url}: #{inspect(e)}")
{:error, e}
end
end
def run(e) do
Logger.error("Rich media failure - invalid args: #{inspect(e)}")
{:discard, :invalid}
end
defp maybe_schedule_expiration(url, fields) do
case TTL.process(fields, url) do
{:ok, ttl} when is_number(ttl) ->
timestamp = DateTime.from_unix!(ttl)
RichMediaExpirationWorker.new(%{"url" => url}, scheduled_at: timestamp)
|> Oban.insert()
_ ->
:ok
end
end
defp stream_update(%{"activity_id" => activity_id}) do
Logger.info("Rich media backfill: streaming update for activity #{activity_id}")
Pleroma.Activity.get_by_id(activity_id)
|> Pleroma.Activity.normalize()
|> Pleroma.Web.ActivityPub.ActivityPub.stream_out()
end
defp warm_cache(key, val), do: @cachex.put(:rich_media_cache, key, val)
def negative_cache(key, ttl \\ :timer.minutes(30)) do
@cachex.put(:rich_media_cache, key, nil, ttl: ttl)
{:discard, :error}
end
end
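Enqueueing a backfill with the worker above is a single call; start/1 adds the URL hash, and the unique options deduplicate repeat requests in Oban (URL illustrative):

Pleroma.Web.RichMedia.Backfill.start(%{url: "https://example.com/article"})

Passing an optional activity_id in the same map makes run/1 stream the rendered card out to clients once it exists.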

View file

@@ -0,0 +1,149 @@
defmodule Pleroma.Web.RichMedia.Card do
use Ecto.Schema
import Ecto.Changeset
import Ecto.Query
alias Pleroma.Activity
alias Pleroma.HTML
alias Pleroma.Object
alias Pleroma.Repo
alias Pleroma.Web.RichMedia.Backfill
alias Pleroma.Web.RichMedia.Parser
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
@config_impl Application.compile_env(:pleroma, [__MODULE__, :config_impl], Pleroma.Config)
@type t :: %__MODULE__{}
schema "rich_media_card" do
field(:url_hash, :binary)
field(:fields, :map)
timestamps()
end
@doc false
def changeset(card, attrs) do
card
|> cast(attrs, [:url_hash, :fields])
|> validate_required([:url_hash, :fields])
|> unique_constraint(:url_hash)
end
@spec create(String.t(), map()) :: {:ok, t()}
def create(url, fields) do
url_hash = url_to_hash(url)
fields = Map.put_new(fields, "url", url)
%__MODULE__{}
|> changeset(%{url_hash: url_hash, fields: fields})
|> Repo.insert(on_conflict: {:replace, [:fields]}, conflict_target: :url_hash)
end
@spec delete(String.t()) :: {:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()} | :ok
def delete(url) do
url_hash = url_to_hash(url)
@cachex.del(:rich_media_cache, url_hash)
case get_by_url(url) do
%__MODULE__{} = card -> Repo.delete(card)
nil -> :ok
end
end
@spec get_by_url(String.t() | nil) :: t() | nil | :error
def get_by_url(url) when is_binary(url) do
if @config_impl.get([:rich_media, :enabled]) do
url_hash = url_to_hash(url)
@cachex.fetch!(:rich_media_cache, url_hash, fn _ ->
result =
__MODULE__
|> where(url_hash: ^url_hash)
|> Repo.one()
case result do
%__MODULE__{} = card -> {:commit, card}
_ -> {:ignore, nil}
end
end)
else
:error
end
end
def get_by_url(nil), do: nil
@spec get_or_backfill_by_url(String.t(), map()) :: t() | nil
def get_or_backfill_by_url(url, backfill_opts \\ %{}) do
case get_by_url(url) do
%__MODULE__{} = card ->
card
nil ->
backfill_opts = Map.put(backfill_opts, :url, url)
Backfill.start(backfill_opts)
nil
:error ->
nil
end
end
@spec get_by_activity(Activity.t()) :: t() | nil | :error
# Fake/Draft activity
def get_by_activity(%Activity{id: "pleroma:fakeid"} = activity) do
with %Object{} = object <- Object.normalize(activity, fetch: false),
url when not is_nil(url) <- HTML.extract_first_external_url_from_object(object) do
case get_by_url(url) do
# Cache hit
%__MODULE__{} = card ->
card
# Cache miss, but fetch for rendering the Draft
_ ->
with {:ok, fields} <- Parser.parse(url),
{:ok, card} <- create(url, fields) do
card
else
_ -> nil
end
end
else
_ ->
nil
end
end
def get_by_activity(activity) do
with %Object{} = object <- Object.normalize(activity, fetch: false),
{_, nil} <- {:cached, get_cached_url(object, activity.id)} do
nil
else
{:cached, url} ->
get_or_backfill_by_url(url, %{activity_id: activity.id})
_ ->
:error
end
end
@spec url_to_hash(String.t()) :: String.t()
def url_to_hash(url) do
:crypto.hash(:sha256, url) |> Base.encode16(case: :lower)
end
defp get_cached_url(object, activity_id) do
key = "URL|#{activity_id}"
@cachex.fetch!(:scrubber_cache, key, fn _ ->
url = HTML.extract_first_external_url_from_object(object)
Activity.HTML.add_cache_key_for(activity_id, key)
{:commit, url}
end)
end
end
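A minimal read-path sketch for the schema above:

alias Pleroma.Web.RichMedia.Card

case Card.get_or_backfill_by_url("https://example.com/article") do
  %Card{fields: fields} -> fields["title"]
  # nil covers both "no card yet, backfill enqueued" and "rich media disabled"
  nil -> nil
end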

View file

@@ -3,85 +3,13 @@
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.RichMedia.Helpers do
alias Pleroma.Activity
alias Pleroma.Config
alias Pleroma.HTML
alias Pleroma.Object
alias Pleroma.Web.RichMedia.Parser
@options [
max_body: 2_000_000,
receive_timeout: 2_000
]
@spec validate_page_url(URI.t() | binary()) :: :ok | :error
defp validate_page_url(page_url) when is_binary(page_url) do
validate_tld = Config.get([Pleroma.Formatter, :validate_tld])
page_url
|> Linkify.Parser.url?(validate_tld: validate_tld)
|> parse_uri(page_url)
end
defp validate_page_url(%URI{host: host, scheme: "https", authority: authority})
when is_binary(authority) do
cond do
host in Config.get([:rich_media, :ignore_hosts], []) ->
:error
get_tld(host) in Config.get([:rich_media, :ignore_tld], []) ->
:error
true ->
:ok
end
end
defp validate_page_url(_), do: :error
defp parse_uri(true, url) do
url
|> URI.parse()
|> validate_page_url
end
defp parse_uri(_, _), do: :error
defp get_tld(host) do
host
|> String.split(".")
|> Enum.reverse()
|> hd
end
def fetch_data_for_object(object) do
with true <- Config.get([:rich_media, :enabled]),
{:ok, page_url} <-
HTML.extract_first_external_url_from_object(object),
:ok <- validate_page_url(page_url),
{:ok, rich_media} <- Parser.parse(page_url) do
%{page_url: page_url, rich_media: rich_media}
else
_ -> %{}
end
end
def fetch_data_for_activity(%Activity{data: %{"type" => "Create"}} = activity) do
with true <- Config.get([:rich_media, :enabled]),
%Object{} = object <- Object.normalize(activity, fetch: false) do
fetch_data_for_object(object)
else
_ -> %{}
end
end
def fetch_data_for_activity(_), do: %{}
def rich_media_get(url) do
headers = [{"user-agent", Pleroma.Application.user_agent() <> "; Bot"}]
head_check =
case Pleroma.HTTP.head(url, headers, @options) do
case Pleroma.HTTP.head(url, headers, http_options()) do
# If the HEAD request didn't reach the server for whatever reason,
# we assume the GET that comes right after won't either
{:error, _} = e ->
@@ -96,7 +24,7 @@ defmodule Pleroma.Web.RichMedia.Helpers do
:ok
end
with :ok <- head_check, do: Pleroma.HTTP.get(url, headers, @options)
with :ok <- head_check, do: Pleroma.HTTP.get(url, headers, http_options())
end
defp check_content_type(headers) do
@@ -112,12 +40,13 @@ defmodule Pleroma.Web.RichMedia.Helpers do
end
end
@max_body @options[:max_body]
defp check_content_length(headers) do
max_body = Keyword.get(http_options(), :max_body)
case List.keyfind(headers, "content-length", 0) do
{_, maybe_content_length} ->
case Integer.parse(maybe_content_length) do
{content_length, ""} when content_length <= @max_body -> :ok
{content_length, ""} when content_length <= max_body -> :ok
{_, ""} -> {:error, :body_too_large}
_ -> :ok
end
@@ -126,4 +55,11 @@ defmodule Pleroma.Web.RichMedia.Helpers do
:ok
end
end
defp http_options do
[
pool: :media,
max_body: Config.get([:rich_media, :max_body], 5_000_000)
]
end
end
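With http_options/0 above, the body-size cap moves from the compile-time @options attribute to runtime config, so it can be tuned without recompiling; e.g.:

config :pleroma, :rich_media,
  max_body: 5_000_000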

View file

@@ -1,161 +1,41 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2021 Pleroma Authors <https://pleroma.social/>
# Copyright © 2017-2022 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.RichMedia.Parser do
require Logger
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
@config_impl Application.compile_env(:pleroma, [__MODULE__, :config_impl], Pleroma.Config)
defp parsers do
Pleroma.Config.get([:rich_media, :parsers])
end
def parse(nil), do: {:error, "No URL provided"}
def parse(nil), do: nil
if Pleroma.Config.get(:env) == :test do
@spec parse(String.t()) :: {:ok, map()} | {:error, any()}
def parse(url), do: parse_with_timeout(url)
else
@spec parse(String.t()) :: {:ok, map()} | {:error, any()}
def parse(url) do
with {:ok, data} <- get_cached_or_parse(url),
{:ok, _} <- set_ttl_based_on_image(data, url) do
{:ok, data}
end
end
defp get_cached_or_parse(url) do
case @cachex.fetch(:rich_media_cache, url, fn ->
case parse_with_timeout(url) do
{:ok, _} = res ->
{:commit, res}
{:error, reason} = e ->
# Unfortunately we have to log errors here, instead of doing that
# along with ttl setting at the bottom. Otherwise we can get log spam
# if more than one process was waiting for the rich media card
# while it was generated. Ideally we would set ttl here as well,
# so we don't override it number_of_waiters_on_generation
# times, but one, obviously, can't set ttl for not-yet-created entry
# and Cachex doesn't support returning ttl from the fetch callback.
log_error(url, reason)
{:commit, e}
end
end) do
{action, res} when action in [:commit, :ok] ->
case res do
{:ok, _data} = res ->
res
{:error, reason} = e ->
if action == :commit, do: set_error_ttl(url, reason)
e
end
{:error, e} ->
{:error, {:cachex_error, e}}
end
end
defp set_error_ttl(_url, :body_too_large), do: :ok
defp set_error_ttl(_url, {:content_type, _}), do: :ok
# The TTL is not set for the errors above, since they are unlikely to change
# with time
defp set_error_ttl(url, _reason) do
ttl = Pleroma.Config.get([:rich_media, :failure_backoff], 60_000)
@cachex.expire(:rich_media_cache, url, ttl)
:ok
end
defp log_error(url, {:invalid_metadata, data}) do
Logger.debug(fn -> "Incomplete or invalid metadata for #{url}: #{inspect(data)}" end)
end
defp log_error(url, reason) do
Logger.warning(fn -> "Rich media error for #{url}: #{inspect(reason)}" end)
@spec parse(String.t()) :: {:ok, map()} | {:error, any()}
def parse(url) do
with {_, true} <- {:config, @config_impl.get([:rich_media, :enabled])},
:ok <- validate_page_url(url),
{:ok, data} <- parse_url(url) do
data = Map.put(data, "url", url)
{:ok, data}
else
{:config, _} -> {:error, :rich_media_disabled}
e -> e
end
end
@doc """
Set the rich media cache based on the expiration time of image.
Adopt behaviour `Pleroma.Web.RichMedia.Parser.TTL`
## Example
defmodule MyModule do
@behaviour Pleroma.Web.RichMedia.Parser.TTL
def ttl(data, url) do
image_url = Map.get(data, :image)
# do some parsing on the url and get the ttl of the image
# and return the ttl as unix time
parse_ttl_from_url(image_url)
end
end
Define the module in the config
config :pleroma, :rich_media,
ttl_setters: [MyModule]
"""
@spec set_ttl_based_on_image(map(), String.t()) ::
{:ok, Integer.t() | :noop} | {:error, :no_key}
def set_ttl_based_on_image(data, url) do
case get_ttl_from_image(data, url) do
{:ok, ttl} when is_number(ttl) ->
ttl = ttl * 1000
case @cachex.expire_at(:rich_media_cache, url, ttl) do
{:ok, true} -> {:ok, ttl}
{:ok, false} -> {:error, :no_key}
end
_ ->
{:ok, :noop}
end
end
defp get_ttl_from_image(data, url) do
[:rich_media, :ttl_setters]
|> Pleroma.Config.get()
|> Enum.reduce({:ok, nil}, fn
module, {:ok, _ttl} ->
module.ttl(data, url)
_, error ->
error
end)
end
def parse_url(url) do
defp parse_url(url) do
with {:ok, %Tesla.Env{body: html}} <- Pleroma.Web.RichMedia.Helpers.rich_media_get(url),
{:ok, html} <- Floki.parse_document(html) do
html
|> maybe_parse()
|> Map.put("url", url)
|> clean_parsed_data()
|> check_parsed_data()
end
end
def parse_with_timeout(url) do
try do
task =
Task.Supervisor.async_nolink(Pleroma.TaskSupervisor, fn ->
parse_url(url)
end)
Task.await(task, 5000)
catch
:exit, {:timeout, _} ->
Logger.warning("Timeout while fetching rich media for #{url}")
{:error, :timeout}
end
end
defp maybe_parse(html) do
Enum.reduce_while(parsers(), %{}, fn parser, acc ->
case parser.parse(html, acc) do
@@ -181,4 +61,46 @@ defmodule Pleroma.Web.RichMedia.Parser do
end)
|> Map.new()
end
@spec validate_page_url(URI.t() | binary()) :: :ok | :error
defp validate_page_url(page_url) when is_binary(page_url) do
validate_tld = @config_impl.get([Pleroma.Formatter, :validate_tld])
page_url
|> Linkify.Parser.url?(validate_tld: validate_tld)
|> parse_uri(page_url)
end
defp validate_page_url(%URI{host: host, scheme: "https"}) do
cond do
Linkify.Parser.ip?(host) ->
:error
host in @config_impl.get([:rich_media, :ignore_hosts], []) ->
:error
get_tld(host) in @config_impl.get([:rich_media, :ignore_tld], []) ->
:error
true ->
:ok
end
end
defp validate_page_url(_), do: :error
defp parse_uri(true, url) do
url
|> URI.parse()
|> validate_page_url
end
defp parse_uri(_, _), do: :error
defp get_tld(host) do
host
|> String.split(".")
|> Enum.reverse()
|> hd
end
end
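The validation above only accepts public-looking https URLs; a few illustrative outcomes, assuming default config:

validate_page_url("https://example.com/post") #=> :ok
validate_page_url("http://example.com/post")  #=> :error (https required)
validate_page_url("not a url")                #=> :error (fails Linkify's URL check)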

View file

@@ -3,5 +3,18 @@
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.RichMedia.Parser.TTL do
@callback ttl(Map.t(), String.t()) :: Integer.t() | nil
@callback ttl(map(), String.t()) :: integer() | nil
@spec process(map(), String.t()) :: {:ok, integer() | nil}
def process(data, url) do
[:rich_media, :ttl_setters]
|> Pleroma.Config.get()
|> Enum.reduce_while({:ok, nil}, fn
module, acc ->
case module.ttl(data, url) do
ttl when is_number(ttl) -> {:halt, {:ok, ttl}}
_ -> {:cont, acc}
end
end)
end
end
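A hypothetical setter implementing the callback above; returning nil makes process/2 fall through to the next configured module:

defmodule MyApp.RichMedia.FixedTTL do
  @behaviour Pleroma.Web.RichMedia.Parser.TTL

  @impl true
  def ttl(_data, _url) do
    # expire one hour from now, as unix seconds
    DateTime.utc_now() |> DateTime.to_unix() |> Kernel.+(3600)
  end
end

It would be enabled via config :pleroma, :rich_media, ttl_setters: [MyApp.RichMedia.FixedTTL].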

View file

@@ -1,5 +1,5 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2021 Pleroma Authors <https://pleroma.social/>
# Copyright © 2017-2022 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.RichMedia.Parser.TTL.AwsSignedUrl do
@@ -7,25 +7,26 @@ defmodule Pleroma.Web.RichMedia.Parser.TTL.AwsSignedUrl do
@impl true
def ttl(data, _url) do
image = Map.get(data, :image)
image = Map.get(data, "image")
if is_aws_signed_url(image) do
if aws_signed_url?(image) do
image
|> parse_query_params()
|> format_query_params()
|> get_expiration_timestamp()
else
{:error, "Not aws signed url #{inspect(image)}"}
nil
end
end
defp is_aws_signed_url(image) when is_binary(image) and image != "" do
defp aws_signed_url?(image) when is_binary(image) and image != "" do
%URI{host: host, query: query} = URI.parse(image)
String.contains?(host, "amazonaws.com") and String.contains?(query, "X-Amz-Expires")
is_binary(host) and String.contains?(host, "amazonaws.com") and
is_binary(query) and String.contains?(query, "X-Amz-Expires")
end
defp is_aws_signed_url(_), do: nil
defp aws_signed_url?(_), do: nil
defp parse_query_params(image) do
%URI{query: query} = URI.parse(image)
@@ -45,6 +46,6 @@ defmodule Pleroma.Web.RichMedia.Parser.TTL.AwsSignedUrl do
|> Map.get("X-Amz-Date")
|> Timex.parse("{ISO:Basic:Z}")
{:ok, Timex.to_unix(date) + String.to_integer(Map.get(params, "X-Amz-Expires"))}
Timex.to_unix(date) + String.to_integer(Map.get(params, "X-Amz-Expires"))
end
end

View file

@@ -0,0 +1,20 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2022 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.RichMedia.Parser.TTL.Opengraph do
@behaviour Pleroma.Web.RichMedia.Parser.TTL
@impl true
def ttl(%{"ttl" => ttl_string}, _url) when is_binary(ttl_string) do
try do
ttl = String.to_integer(ttl_string)
now = DateTime.utc_now() |> DateTime.to_unix()
now + ttl
rescue
_ -> nil
end
end
def ttl(_, _), do: nil
end

View file

@@ -1,5 +1,5 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2021 Pleroma Authors <https://pleroma.social/>
# Copyright © 2017-2022 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Web.RichMedia.Parsers.OEmbed do

View file

@@ -101,15 +101,6 @@ defmodule Pleroma.Web.Router do
plug(Pleroma.Web.Plugs.IdempotencyPlug)
end
pipeline :admin_interface do
plug(:browser)
plug(:fetch_session)
plug(:authenticate)
plug(:fetch_flash)
plug(Pleroma.Web.Plugs.EnsureUserTokenAssignsPlug)
plug(Pleroma.Web.Plugs.UserIsAdminPlug)
end
pipeline :require_privileged_staff do
plug(Pleroma.Web.Plugs.EnsureStaffPrivilegedPlug)
end
@@ -482,12 +473,6 @@ defmodule Pleroma.Web.Router do
post("/frontend", FrontendSwitcherController, :do_switch)
end
scope "/akkoma/admin/", Pleroma.Web.AdminControl do
pipe_through(:admin_interface)
get("/", AdminControlController, :index)
end
scope "/api/v1/akkoma", Pleroma.Web.AkkomaAPI do
pipe_through(:api)

View file

@@ -9,13 +9,13 @@
xmlns:ostatus="http://ostatus.org/schema/1.0"
xmlns:statusnet="http://status.net/schema/api/1/">
<id><%= '#{url(~p"/tags/#{@tag}")}.rss' %></id>
<id><%= "#{url(~p"/tags/#{@tag}")}.rss" %></id>
<title>#<%= @tag %></title>
<subtitle><%= Gettext.dpgettext("static_pages", "tag feed description", "These are public toots tagged with #%{tag}. You can interact with them if you have an account anywhere in the fediverse.", tag: @tag) %></subtitle>
<logo><%= feed_logo() %></logo>
<updated><%= most_recent_update(@activities) %></updated>
<link rel="self" href="<%= '#{url(~p"/tags/#{@tag}")}.atom' %>" type="application/atom+xml"/>
<link rel="self" href="<%= "#{url(~p"/tags/#{@tag}")}.atom" %>" type="application/atom+xml"/>
<%= for activity <- @activities do %>
<%= render @view_module, "_tag_activity.atom", Map.merge(assigns, prepare_activity(activity, actor: true)) %>
<% end %>

View file

@@ -5,7 +5,7 @@
<title>#<%= @tag %></title>
<description><%= Gettext.dpgettext("static_pages", "tag feed description", "These are public toots tagged with #%{tag}. You can interact with them if you have an account anywhere in the fediverse.", tag: @tag) %></description>
<link><%= '#{url(~p"/tags/#{@tag}")}.rss' %></link>
<link><%= "#{url(~p"/tags/#{@tag}")}.rss" %></link>
<webfeeds:logo><%= feed_logo() %></webfeeds:logo>
<webfeeds:accentColor>2b90d9</webfeeds:accentColor>
<%= for activity <- @activities do %>

View file

@@ -10,12 +10,12 @@
<title><%= @user.nickname <> "'s timeline" %></title>
<updated><%= most_recent_update(@activities, @user) %></updated>
<logo><%= logo(@user) %></logo>
<link rel="self" href="<%= '#{url(~p"/users/#{@user.nickname}/feed")}.atom' %>" type="application/atom+xml"/>
<link rel="self" href="<%= "#{url(~p"/users/#{@user.nickname}/feed")}.atom" %>" type="application/atom+xml"/>
<%= render @view_module, "_author.atom", assigns %>
<%= if last_activity(@activities) do %>
<link rel="next" href="<%= '#{url(~p"/users/#{@user.nickname}/feed")}.atom?max_id=#{last_activity(@activities).id}' %>" type="application/atom+xml"/>
<link rel="next" href="<%= "#{url(~p"/users/#{@user.nickname}/feed")}.atom?max_id=#{last_activity(@activities).id}" %>" type="application/atom+xml"/>
<% end %>
<%= for activity <- @activities do %>

View file

@@ -5,12 +5,12 @@
<title><%= @user.nickname <> "'s timeline" %></title>
<updated><%= most_recent_update(@activities, @user) %></updated>
<image><%= logo(@user) %></image>
<link><%= '#{url(~p"/users/#{@user.nickname}/feed")}.rss' %></link>
<link><%= "#{url(~p"/users/#{@user.nickname}/feed")}.rss" %></link>
<%= render @view_module, "_author.rss", assigns %>
<%= if last_activity(@activities) do %>
<link rel="next"><%= '#{url(~p"/users/#{@user.nickname}/feed")}.rss?max_id=#{last_activity(@activities).id}' %></link>
<link rel="next"><%= "#{url(~p"/users/#{@user.nickname}/feed")}.rss?max_id=#{last_activity(@activities).id}" %></link>
<% end %>
<%= for activity <- @activities do %>
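The four template edits above are all the same Elixir 1.17 fix: single-quoted interpolation builds a charlist, which now triggers a warning, while double quotes build the binary the feeds actually need:

'#{1 + 1}'  # charlist ~c"2" — warns on Elixir 1.17
"#{1 + 1}"  # binary "2"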

View file

@@ -156,11 +156,21 @@ defmodule Pleroma.Web.WebFinger do
end
end
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
def find_lrdd_template(domain) do
@cachex.fetch!(:host_meta_cache, domain, fn _ ->
{:commit, fetch_lrdd_template(domain)}
end)
rescue
e -> {:error, "Cachex error: #{inspect(e)}"}
end
defp fetch_lrdd_template(domain) do
# WebFinger is restricted to HTTPS - https://tools.ietf.org/html/rfc7033#section-9.1
meta_url = "https://#{domain}/.well-known/host-meta"
with {:ok, %{status: status, body: body}} when status in 200..299 <- HTTP.get(meta_url) do
with {:ok, %{status: status, body: body}} when status in 200..299 <-
HTTP.Backoff.get(meta_url) do
get_template_from_xml(body)
else
error ->
@@ -169,7 +179,7 @@ defmodule Pleroma.Web.WebFinger do
end
end
defp get_address_from_domain(domain, encoded_account) when is_binary(domain) do
defp get_address_from_domain(domain, "acct:" <> _ = encoded_account) when is_binary(domain) do
case find_lrdd_template(domain) do
{:ok, template} ->
String.replace(template, "{uri}", encoded_account)
@@ -179,6 +189,11 @@ defmodule Pleroma.Web.WebFinger do
end
end
defp get_address_from_domain(domain, account) when is_binary(domain) do
encoded_account = URI.encode("acct:#{account}")
get_address_from_domain(domain, encoded_account)
end
defp get_address_from_domain(_, _), do: {:error, :webfinger_no_domain}
@spec finger(String.t()) :: {:ok, map()} | {:error, any()}
@@ -193,11 +208,9 @@ defmodule Pleroma.Web.WebFinger do
URI.parse(account).host
end
encoded_account = URI.encode("acct:#{account}")
with address when is_binary(address) <- get_address_from_domain(domain, encoded_account),
with address when is_binary(address) <- get_address_from_domain(domain, account),
{:ok, %{status: status, body: body, headers: headers}} when status in 200..299 <-
HTTP.get(
HTTP.Backoff.get(
address,
[{"accept", "application/xrd+xml,application/jrd+json"}]
) do
@@ -217,10 +230,28 @@ defmodule Pleroma.Web.WebFinger do
_ ->
{:error, {:content_type, nil}}
end
|> case do
{:ok, data} -> validate_webfinger(address, data)
error -> error
end
else
error ->
Logger.debug("Couldn't finger #{account}: #{inspect(error)}")
error
end
end
defp validate_webfinger(request_url, %{"subject" => "acct:" <> acct = subject} = data) do
with [_name, acct_host] <- String.split(acct, "@"),
{_, url} <- {:address, get_address_from_domain(acct_host, subject)},
%URI{host: request_host} <- URI.parse(request_url),
%URI{host: acct_host} <- URI.parse(url),
{_, true} <- {:hosts_match, acct_host == request_host} do
{:ok, data}
else
_ -> {:error, {:webfinger_invalid, request_url, data}}
end
end
defp validate_webfinger(url, data), do: {:error, {:webfinger_invalid, url, data}}
end
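The validate_webfinger/2 added above guards against cross-domain spoofing: the subject's account domain must resolve back to the same host the document was fetched from. Sketch of a rejected response (illustrative values; assumes evil.example's LRDD template points at its own host):

validate_webfinger(
  "https://example.com/.well-known/webfinger?resource=acct:alice@example.com",
  %{"subject" => "acct:alice@evil.example"}
)
#=> {:error, {:webfinger_invalid, url, data}}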

View file

@@ -0,0 +1,15 @@
# Pleroma: A lightweight social networking server
# Copyright © 2017-2022 Pleroma Authors <https://pleroma.social/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Pleroma.Workers.RichMediaExpirationWorker do
alias Pleroma.Web.RichMedia.Card
use Oban.Worker,
queue: :rich_media_expiration
@impl Oban.Worker
def perform(%Job{args: %{"url" => url} = _args}) do
Card.delete(url)
end
end

mix.exs
View file

@@ -4,7 +4,7 @@ defmodule Pleroma.Mixfile do
def project do
[
app: :pleroma,
version: version("3.13.1"),
version: version("3.13.2"),
elixir: "~> 1.14",
elixirc_paths: elixirc_paths(Mix.env()),
compilers: Mix.compilers(),
@@ -78,7 +78,8 @@ defmodule Pleroma.Mixfile do
:comeonin,
:fast_sanitize,
:os_mon,
:ssl
:ssl,
:recon
],
included_applications: [:ex_syslogger]
]
@@ -116,16 +117,7 @@
[
{:phoenix, "~> 1.7.0"},
{:phoenix_view, "~> 2.0"},
{:phoenix_live_dashboard, "~> 0.8.3"},
{:esbuild, "~> 0.8", runtime: Mix.env() == :dev},
{:tailwind, "~> 0.2", runtime: Mix.env() == :dev},
{:heroicons,
github: "tailwindlabs/heroicons",
tag: "v2.1.1",
sparse: "optimized",
app: false,
compile: false,
depth: 1},
{:phoenix_live_dashboard, "~> 0.7.2"},
{:tzdata, "~> 1.1.1"},
{:plug_cowboy, "~> 2.6"},
{:phoenix_pubsub, "~> 2.1"},
@@ -145,7 +137,7 @@
{:tesla, "~> 1.7"},
{:castore, "~> 1.0"},
{:cowlib, "~> 2.12"},
{:finch, "~> 0.16.0"},
{:finch, "~> 0.18.0"},
{:jason, "~> 1.4"},
{:trailing_format_plug, "~> 0.0.7"},
{:mogrify, "~> 0.9"},
@@ -166,10 +158,10 @@
{:floki, "~> 0.34"},
{:timex, "~> 3.7"},
{:ueberauth, "== 0.10.5"},
{:linkify, git: "https://akkoma.dev/AkkomaGang/linkify.git"},
{:linkify, "~> 0.5.3"},
{:http_signatures,
git: "https://akkoma.dev/AkkomaGang/http_signatures.git",
ref: "6640ce7d24c783ac2ef56e27d00d12e8dc85f396"},
ref: "d44c43d66758c6a73eaa4da9cffdbee0c5da44ae"},
{:telemetry, "~> 1.2"},
{:telemetry_poller, "~> 1.0"},
{:telemetry_metrics, "~> 0.6"},
@@ -233,13 +225,6 @@
"ecto.rollback": ["pleroma.ecto.rollback"],
"ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"],
"ecto.reset": ["ecto.drop", "ecto.setup"],
"assets.setup": ["tailwind.install --if-missing", "esbuild.install --if-missing"],
"assets.build": ["tailwind pleroma", "esbuild pleroma"],
"assets.deploy": [
"tailwind pleroma --minify",
"esbuild pleroma --minify",
"phx.digest"
],
test: ["ecto.create --quiet", "ecto.migrate", "test"],
docs: ["pleroma.docs", "docs"],
analyze: ["credo --strict --only=warnings,todo,fixme,consistency,readability"],

View file

@@ -3,7 +3,7 @@
"base62": {:hex, :base62, "1.2.2", "85c6627eb609317b70f555294045895ffaaeb1758666ab9ef9ca38865b11e629", [:mix], [{:custom_base, "~> 0.2.1", [hex: :custom_base, repo: "hexpm", optional: false]}], "hexpm", "d41336bda8eaa5be197f1e4592400513ee60518e5b9f4dcf38f4b4dae6f377bb"},
"bbcode_pleroma": {:hex, :bbcode_pleroma, "0.2.0", "d36f5bca6e2f62261c45be30fa9b92725c0655ad45c99025cb1c3e28e25803ef", [:mix], [{:nimble_parsec, "~> 0.5", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "19851074419a5fedb4ef49e1f01b30df504bb5dbb6d6adfc135238063bebd1c3"},
"bcrypt_elixir": {:hex, :bcrypt_elixir, "3.0.1", "9be815469e6bfefec40fa74658ecbbe6897acfb57614df1416eeccd4903f602c", [:make, :mix], [{:comeonin, "~> 5.3", [hex: :comeonin, repo: "hexpm", optional: false]}, {:elixir_make, "~> 0.6", [hex: :elixir_make, repo: "hexpm", optional: false]}], "hexpm", "486bb95efb645d1efc6794c1ddd776a186a9a713abf06f45708a6ce324fb96cf"},
"benchee": {:hex, :benchee, "1.3.0", "f64e3b64ad3563fa9838146ddefb2d2f94cf5b473bdfd63f5ca4d0657bf96694", [:mix], [{:deep_merge, "~> 1.0", [hex: :deep_merge, repo: "hexpm", optional: false]}, {:statistex, "~> 1.0", [hex: :statistex, repo: "hexpm", optional: false]}, {:table, "~> 0.1.0", [hex: :table, repo: "hexpm", optional: true]}], "hexpm", "34f4294068c11b2bd2ebf2c59aac9c7da26ffa0068afdf3419f1b176e16c5f81"},
"benchee": {:hex, :benchee, "1.3.1", "c786e6a76321121a44229dde3988fc772bca73ea75170a73fd5f4ddf1af95ccf", [:mix], [{:deep_merge, "~> 1.0", [hex: :deep_merge, repo: "hexpm", optional: false]}, {:statistex, "~> 1.0", [hex: :statistex, repo: "hexpm", optional: false]}, {:table, "~> 0.1.0", [hex: :table, repo: "hexpm", optional: true]}], "hexpm", "76224c58ea1d0391c8309a8ecbfe27d71062878f59bd41a390266bf4ac1cc56d"},
"bunt": {:hex, :bunt, "1.0.0", "081c2c665f086849e6d57900292b3a161727ab40431219529f13c4ddcf3e7a44", [:mix], [], "hexpm", "dc5f86aa08a5f6fa6b8096f0735c4e76d54ae5c9fa2c143e5a1fc7c1cd9bb6b5"},
"cachex": {:hex, :cachex, "3.6.0", "14a1bfbeee060dd9bec25a5b6f4e4691e3670ebda28c8ba2884b12fe30b36bf8", [:mix], [{:eternal, "~> 1.2", [hex: :eternal, repo: "hexpm", optional: false]}, {:jumper, "~> 1.0", [hex: :jumper, repo: "hexpm", optional: false]}, {:sleeplocks, "~> 1.1", [hex: :sleeplocks, repo: "hexpm", optional: false]}, {:unsafe, "~> 1.0", [hex: :unsafe, repo: "hexpm", optional: false]}], "hexpm", "ebf24e373883bc8e0c8d894a63bbe102ae13d918f790121f5cfe6e485cc8e2e2"},
"calendar": {:hex, :calendar, "1.0.0", "f52073a708528482ec33d0a171954ca610fe2bd28f1e871f247dc7f1565fa807", [:mix], [{:tzdata, "~> 0.1.201603 or ~> 0.5.20 or ~> 1.0", [hex: :tzdata, repo: "hexpm", optional: false]}], "hexpm", "990e9581920c82912a5ee50e62ff5ef96da6b15949a2ee4734f935fdef0f0a6f"},
@@ -18,7 +18,7 @@
"cowboy": {:hex, :cowboy, "2.12.0", "f276d521a1ff88b2b9b4c54d0e753da6c66dd7be6c9fca3d9418b561828a3731", [:make, :rebar3], [{:cowlib, "2.13.0", [hex: :cowlib, repo: "hexpm", optional: false]}, {:ranch, "1.8.0", [hex: :ranch, repo: "hexpm", optional: false]}], "hexpm", "8a7abe6d183372ceb21caa2709bec928ab2b72e18a3911aa1771639bef82651e"},
"cowboy_telemetry": {:hex, :cowboy_telemetry, "0.4.0", "f239f68b588efa7707abce16a84d0d2acf3a0f50571f8bb7f56a15865aae820c", [:rebar3], [{:cowboy, "~> 2.7", [hex: :cowboy, repo: "hexpm", optional: false]}, {:telemetry, "~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "7d98bac1ee4565d31b62d59f8823dfd8356a169e7fcbb83831b8a5397404c9de"},
"cowlib": {:hex, :cowlib, "2.13.0", "db8f7505d8332d98ef50a3ef34b34c1afddec7506e4ee4dd4a3a266285d282ca", [:make, :rebar3], [], "hexpm", "e1e1284dc3fc030a64b1ad0d8382ae7e99da46c3246b815318a4b848873800a4"},
"credo": {:hex, :credo, "1.7.5", "643213503b1c766ec0496d828c90c424471ea54da77c8a168c725686377b9545", [:mix], [{:bunt, "~> 0.2.1 or ~> 1.0", [hex: :bunt, repo: "hexpm", optional: false]}, {:file_system, "~> 0.2 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "f799e9b5cd1891577d8c773d245668aa74a2fcd15eb277f51a0131690ebfb3fd"},
"credo": {:hex, :credo, "1.7.7", "771445037228f763f9b2afd612b6aa2fd8e28432a95dbbc60d8e03ce71ba4446", [:mix], [{:bunt, "~> 0.2.1 or ~> 1.0", [hex: :bunt, repo: "hexpm", optional: false]}, {:file_system, "~> 0.2 or ~> 1.0", [hex: :file_system, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "8bc87496c9aaacdc3f90f01b7b0582467b69b4bd2441fe8aae3109d843cc2f2e"},
"custom_base": {:hex, :custom_base, "0.2.1", "4a832a42ea0552299d81652aa0b1f775d462175293e99dfbe4d7dbaab785a706", [:mix], [], "hexpm", "8df019facc5ec9603e94f7270f1ac73ddf339f56ade76a721eaa57c1493ba463"},
"db_connection": {:hex, :db_connection, "2.6.0", "77d835c472b5b67fc4f29556dee74bf511bbafecdcaf98c27d27fa5918152086", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "c2f992d15725e721ec7fbc1189d4ecdb8afef76648c746a8e1cad35e3b8a35f3"},
"decimal": {:hex, :decimal, "2.1.1", "5611dca5d4b2c3dd497dec8f68751f1f1a54755e8ed2a966c2633cf885973ad6", [:mix], [], "hexpm", "53cfe5f497ed0e7771ae1a475575603d77425099ba5faef9394932b35020ffcc"},
@@ -29,19 +29,18 @@
"eblurhash": {:hex, :eblurhash, "1.2.2", "7da4255aaea984b31bb71155f673257353b0e0554d0d30dcf859547e74602582", [:rebar3], [], "hexpm", "8c20ca00904de023a835a9dcb7b7762fed32264c85a80c3cafa85288e405044c"},
"ecto": {:hex, :ecto, "3.10.3", "eb2ae2eecd210b4eb8bece1217b297ad4ff824b4384c0e3fdd28aaf96edd6135", [:mix], [{:decimal, "~> 1.6 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "44bec74e2364d491d70f7e42cd0d690922659d329f6465e89feb8a34e8cd3433"},
"ecto_enum": {:hex, :ecto_enum, "1.4.0", "d14b00e04b974afc69c251632d1e49594d899067ee2b376277efd8233027aec8", [:mix], [{:ecto, ">= 3.0.0", [hex: :ecto, repo: "hexpm", optional: false]}, {:ecto_sql, "> 3.0.0", [hex: :ecto_sql, repo: "hexpm", optional: false]}, {:mariaex, ">= 0.0.0", [hex: :mariaex, repo: "hexpm", optional: true]}, {:postgrex, ">= 0.0.0", [hex: :postgrex, repo: "hexpm", optional: true]}], "hexpm", "8fb55c087181c2b15eee406519dc22578fa60dd82c088be376d0010172764ee4"},
"ecto_psql_extras": {:hex, :ecto_psql_extras, "0.7.15", "0fc29dbae0e444a29bd6abeee4cf3c4c037e692a272478a234a1cc765077dbb1", [:mix], [{:ecto_sql, "~> 3.7", [hex: :ecto_sql, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.16.0 or ~> 0.17.0", [hex: :postgrex, repo: "hexpm", optional: false]}, {:table_rex, "~> 3.1.1 or ~> 4.0.0", [hex: :table_rex, repo: "hexpm", optional: false]}], "hexpm", "b6127f3a5c6fc3d84895e4768cc7c199f22b48b67d6c99b13fbf4a374e73f039"},
"ecto_psql_extras": {:hex, :ecto_psql_extras, "0.8.0", "440719cd74f09b3f01c84455707a2c3972b725c513808e68eb6c5b0ab82bf523", [:mix], [{:ecto_sql, "~> 3.7", [hex: :ecto_sql, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.16.0 or ~> 0.17.0 or ~> 0.18.0", [hex: :postgrex, repo: "hexpm", optional: false]}, {:table_rex, "~> 3.1.1 or ~> 4.0.0", [hex: :table_rex, repo: "hexpm", optional: false]}], "hexpm", "f1512812dc196bcb932a96c82e55f69b543dc125e9d39f5e3631a9c4ec65ef12"},
"ecto_sql": {:hex, :ecto_sql, "3.10.2", "6b98b46534b5c2f8b8b5f03f126e75e2a73c64f3c071149d32987a5378b0fdbd", [:mix], [{:db_connection, "~> 2.4.1 or ~> 2.5", [hex: :db_connection, repo: "hexpm", optional: false]}, {:ecto, "~> 3.10.0", [hex: :ecto, repo: "hexpm", optional: false]}, {:myxql, "~> 0.6.0", [hex: :myxql, repo: "hexpm", optional: true]}, {:postgrex, "~> 0.16.0 or ~> 0.17.0 or ~> 1.0", [hex: :postgrex, repo: "hexpm", optional: true]}, {:tds, "~> 2.1.1 or ~> 2.2", [hex: :tds, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.0 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "68c018debca57cb9235e3889affdaec7a10616a4e3a80c99fa1d01fdafaa9007"},
"elasticsearch": {:git, "https://akkoma.dev/AkkomaGang/elasticsearch-elixir.git", "6cd946f75f6ab9042521a009d1d32d29a90113ca", [ref: "main"]},
"elixir_make": {:hex, :elixir_make, "0.8.3", "d38d7ee1578d722d89b4d452a3e36bcfdc644c618f0d063b874661876e708683", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:certifi, "~> 2.0", [hex: :certifi, repo: "hexpm", optional: true]}], "hexpm", "5c99a18571a756d4af7a4d89ca75c28ac899e6103af6f223982f09ce44942cc9"},
"elixir_make": {:hex, :elixir_make, "0.8.4", "4960a03ce79081dee8fe119d80ad372c4e7badb84c493cc75983f9d3bc8bde0f", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:certifi, "~> 2.0", [hex: :certifi, repo: "hexpm", optional: true]}], "hexpm", "6e7f1d619b5f61dfabd0a20aa268e575572b542ac31723293a4c1a567d5ef040"},
"elixir_xml_to_map": {:hex, :elixir_xml_to_map, "3.1.0", "4d6260486a8cce59e4bf3575fe2dd2a24766546ceeef9f93fcec6f7c62a2827a", [:mix], [{:erlsom, "~> 1.4", [hex: :erlsom, repo: "hexpm", optional: false]}], "hexpm", "8fe5f2e75f90bab07ee2161120c2dc038ebcae8135554f5582990f1c8c21f911"},
"erlex": {:hex, :erlex, "0.2.6", "c7987d15e899c7a2f34f5420d2a2ea0d659682c06ac607572df55a43753aa12e", [:mix], [], "hexpm", "2ed2e25711feb44d52b17d2780eabf998452f6efda104877a3881c2f8c0c0c75"},
"erlsom": {:hex, :erlsom, "1.5.1", "c8fe2babd33ff0846403f6522328b8ab676f896b793634cfe7ef181c05316c03", [:rebar3], [], "hexpm", "7965485494c5844dd127656ac40f141aadfa174839ec1be1074e7edf5b4239eb"},
"esbuild": {:hex, :esbuild, "0.8.1", "0cbf919f0eccb136d2eeef0df49c4acf55336de864e63594adcea3814f3edf41", [:mix], [{:castore, ">= 0.0.0", [hex: :castore, repo: "hexpm", optional: false]}, {:jason, "~> 1.4", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "25fc876a67c13cb0a776e7b5d7974851556baeda2085296c14ab48555ea7560f"},
"eternal": {:hex, :eternal, "1.2.2", "d1641c86368de99375b98d183042dd6c2b234262b8d08dfd72b9eeaafc2a1abd", [:mix], [], "hexpm", "2c9fe32b9c3726703ba5e1d43a1d255a4f3f2d8f8f9bc19f094c7cb1a7a9e782"},
"ex_aws": {:hex, :ex_aws, "2.5.3", "9c2d05ba0c057395b12c7b5ca6267d14cdaec1d8e65bdf6481fe1fd245accfb4", [:mix], [{:configparser_ex, "~> 4.0", [hex: :configparser_ex, repo: "hexpm", optional: true]}, {:hackney, "~> 1.16", [hex: :hackney, repo: "hexpm", optional: true]}, {:jason, "~> 1.1", [hex: :jason, repo: "hexpm", optional: true]}, {:jsx, "~> 2.8 or ~> 3.0", [hex: :jsx, repo: "hexpm", optional: true]}, {:mime, "~> 1.2 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:sweet_xml, "~> 0.7", [hex: :sweet_xml, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "67115f1d399d7ec4d191812ee565c6106cb4b1bbf19a9d4db06f265fd87da97e"},
"ex_aws": {:hex, :ex_aws, "2.5.4", "86c5bb870a49e0ab6f5aa5dd58cf505f09d2624ebe17530db3c1b61c88a673af", [:mix], [{:configparser_ex, "~> 4.0", [hex: :configparser_ex, repo: "hexpm", optional: true]}, {:hackney, "~> 1.16", [hex: :hackney, repo: "hexpm", optional: true]}, {:jason, "~> 1.1", [hex: :jason, repo: "hexpm", optional: true]}, {:jsx, "~> 2.8 or ~> 3.0", [hex: :jsx, repo: "hexpm", optional: true]}, {:mime, "~> 1.2 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:sweet_xml, "~> 0.7", [hex: :sweet_xml, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "e82bd0091bb9a5bb190139599f922ff3fc7aebcca4374d65c99c4e23aa6d1625"},
"ex_aws_s3": {:hex, :ex_aws_s3, "2.5.3", "422468e5c3e1a4da5298e66c3468b465cfd354b842e512cb1f6fbbe4e2f5bdaf", [:mix], [{:ex_aws, "~> 2.0", [hex: :ex_aws, repo: "hexpm", optional: false]}, {:sweet_xml, ">= 0.0.0", [hex: :sweet_xml, repo: "hexpm", optional: true]}], "hexpm", "4f09dd372cc386550e484808c5ac5027766c8d0cd8271ccc578b82ee6ef4f3b8"},
"ex_const": {:hex, :ex_const, "0.2.4", "d06e540c9d834865b012a17407761455efa71d0ce91e5831e86881b9c9d82448", [:mix], [], "hexpm", "96fd346610cc992b8f896ed26a98be82ac4efb065a0578f334a32d60a3ba9767"},
"ex_doc": {:hex, :ex_doc, "0.32.0", "896afb57b1e00030f6ec8b2e19d3ca99a197afb23858d49d94aea673dc222f12", [:mix], [{:earmark_parser, "~> 1.4.39", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.1", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1", [hex: :makeup_erlang, repo: "hexpm", optional: false]}], "hexpm", "ed2c3e42c558f49bda3ff37e05713432006e1719a6c4a3320c7e4735787374e7"},
"ex_const": {:hex, :ex_const, "0.3.0", "9d79516679991baf540ef445438eef1455ca91cf1a3c2680d8fb9e5bea2fe4de", [:mix], [], "hexpm", "76546322abb9e40ee4a2f454cf1c8a5b25c3672fa79bed1ea52c31e0d2428ca9"},
"ex_doc": {:hex, :ex_doc, "0.34.0", "ab95e0775db3df71d30cf8d78728dd9261c355c81382bcd4cefdc74610bef13e", [:mix], [{:earmark_parser, "~> 1.4.39", [hex: :earmark_parser, repo: "hexpm", optional: false]}, {:makeup_c, ">= 0.1.0", [hex: :makeup_c, repo: "hexpm", optional: true]}, {:makeup_elixir, "~> 0.14 or ~> 1.0", [hex: :makeup_elixir, repo: "hexpm", optional: false]}, {:makeup_erlang, "~> 0.1 or ~> 1.0", [hex: :makeup_erlang, repo: "hexpm", optional: false]}, {:makeup_html, ">= 0.1.0", [hex: :makeup_html, repo: "hexpm", optional: true]}], "hexpm", "60734fb4c1353f270c3286df4a0d51e65a2c1d9fba66af3940847cc65a8066d7"},
"ex_machina": {:hex, :ex_machina, "2.7.0", "b792cc3127fd0680fecdb6299235b4727a4944a09ff0fa904cc639272cd92dc7", [:mix], [{:ecto, "~> 2.2 or ~> 3.0", [hex: :ecto, repo: "hexpm", optional: true]}, {:ecto_sql, "~> 3.0", [hex: :ecto_sql, repo: "hexpm", optional: true]}], "hexpm", "419aa7a39bde11894c87a615c4ecaa52d8f107bbdd81d810465186f783245bf8"},
"ex_syslogger": {:hex, :ex_syslogger, "2.0.0", "de6de5c5472a9c4fdafb28fa6610e381ae79ebc17da6490b81d785d68bd124c9", [:mix], [{:jason, "~> 1.2", [hex: :jason, repo: "hexpm", optional: true]}, {:syslog, "~> 1.1.0", [hex: :syslog, repo: "hexpm", optional: false]}], "hexpm", "a52b2fe71764e9e6ecd149ab66635812f68e39279cbeee27c52c0e35e8b8019e"},
"excoveralls": {:hex, :excoveralls, "0.16.1", "0bd42ed05c7d2f4d180331a20113ec537be509da31fed5c8f7047ce59ee5a7c5", [:mix], [{:hackney, "~> 1.16", [hex: :hackney, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm", "dae763468e2008cf7075a64cb1249c97cb4bc71e236c5c2b5e5cdf1cfa2bf138"},
@@ -50,54 +49,53 @@
"fast_sanitize": {:hex, :fast_sanitize, "0.2.3", "67b93dfb34e302bef49fec3aaab74951e0f0602fd9fa99085987af05bd91c7a5", [:mix], [{:fast_html, "~> 2.0", [hex: :fast_html, repo: "hexpm", optional: false]}, {:plug, "~> 1.8", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "e8ad286d10d0386e15d67d0ee125245ebcfbc7d7290b08712ba9013c8c5e56e2"},
"file_ex": {:git, "https://akkoma.dev/AkkomaGang/file_ex.git", "cc7067c7d446c2526e9ecf91d40896b088851569", [ref: "cc7067c7d446c2526e9ecf91d40896b088851569"]},
"file_system": {:hex, :file_system, "1.0.0", "b689cc7dcee665f774de94b5a832e578bd7963c8e637ef940cd44327db7de2cd", [:mix], [], "hexpm", "6752092d66aec5a10e662aefeed8ddb9531d79db0bc145bb8c40325ca1d8536d"},
"finch": {:hex, :finch, "0.16.0", "40733f02c89f94a112518071c0a91fe86069560f5dbdb39f9150042f44dcfb1a", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: false]}, {:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.3", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 0.2.6 or ~> 1.0", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "f660174c4d519e5fec629016054d60edd822cdfe2b7270836739ac2f97735ec5"},
"finch": {:hex, :finch, "0.18.0", "944ac7d34d0bd2ac8998f79f7a811b21d87d911e77a786bc5810adb75632ada4", [:mix], [{:castore, "~> 0.1 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: false]}, {:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.3", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 0.2.6 or ~> 1.0", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "69f5045b042e531e53edc2574f15e25e735b522c37e2ddb766e15b979e03aa65"},
"flake_id": {:hex, :flake_id, "0.1.0", "7716b086d2e405d09b647121a166498a0d93d1a623bead243e1f74216079ccb3", [:mix], [{:base62, "~> 1.2", [hex: :base62, repo: "hexpm", optional: false]}, {:ecto, ">= 2.0.0", [hex: :ecto, repo: "hexpm", optional: true]}], "hexpm", "31fc8090fde1acd267c07c36ea7365b8604055f897d3a53dd967658c691bd827"},
"floki": {:hex, :floki, "0.36.1", "712b7f2ba19a4d5a47dfe3e74d81876c95bbcbee44fe551f0af3d2a388abb3da", [:mix], [], "hexpm", "21ba57abb8204bcc70c439b423fc0dd9f0286de67dc82773a14b0200ada0995f"},
"floki": {:hex, :floki, "0.36.2", "a7da0193538c93f937714a6704369711998a51a6164a222d710ebd54020aa7a3", [:mix], [], "hexpm", "a8766c0bc92f074e5cb36c4f9961982eda84c5d2b8e979ca67f5c268ec8ed580"},
"gen_smtp": {:hex, :gen_smtp, "1.2.0", "9cfc75c72a8821588b9b9fe947ae5ab2aed95a052b81237e0928633a13276fd3", [:rebar3], [{:ranch, ">= 1.8.0", [hex: :ranch, repo: "hexpm", optional: false]}], "hexpm", "5ee0375680bca8f20c4d85f58c2894441443a743355430ff33a783fe03296779"},
"gettext": {:hex, :gettext, "0.22.3", "c8273e78db4a0bb6fba7e9f0fd881112f349a3117f7f7c598fa18c66c888e524", [:mix], [{:expo, "~> 0.4.0", [hex: :expo, repo: "hexpm", optional: false]}], "hexpm", "935f23447713954a6866f1bb28c3a878c4c011e802bcd68a726f5e558e4b64bd"},
"hackney": {:hex, :hackney, "1.20.1", "8d97aec62ddddd757d128bfd1df6c5861093419f8f7a4223823537bad5d064e2", [:rebar3], [{:certifi, "~> 2.12.0", [hex: :certifi, repo: "hexpm", optional: false]}, {:idna, "~> 6.1.0", [hex: :idna, repo: "hexpm", optional: false]}, {:metrics, "~> 1.0.0", [hex: :metrics, repo: "hexpm", optional: false]}, {:mimerl, "~> 1.1", [hex: :mimerl, repo: "hexpm", optional: false]}, {:parse_trans, "3.4.1", [hex: :parse_trans, repo: "hexpm", optional: false]}, {:ssl_verify_fun, "~> 1.1.0", [hex: :ssl_verify_fun, repo: "hexpm", optional: false]}, {:unicode_util_compat, "~> 0.7.0", [hex: :unicode_util_compat, repo: "hexpm", optional: false]}], "hexpm", "fe9094e5f1a2a2c0a7d10918fee36bfec0ec2a979994cff8cfe8058cd9af38e3"},
"heroicons": {:git, "https://github.com/tailwindlabs/heroicons.git", "88ab3a0d790e6a47404cba02800a6b25d2afae50", [tag: "v2.1.1", sparse: "optimized"]},
"hpax": {:hex, :hpax, "0.1.2", "09a75600d9d8bbd064cdd741f21fc06fc1f4cf3d0fcc335e5aa19be1a7235c84", [:mix], [], "hexpm", "2c87843d5a23f5f16748ebe77969880e29809580efdaccd615cd3bed628a8c13"},
"html_entities": {:hex, :html_entities, "0.5.2", "9e47e70598da7de2a9ff6af8758399251db6dbb7eebe2b013f2bbd2515895c3c", [:mix], [], "hexpm", "c53ba390403485615623b9531e97696f076ed415e8d8058b1dbaa28181f4fdcc"},
"http_signatures": {:git, "https://akkoma.dev/AkkomaGang/http_signatures.git", "6640ce7d24c783ac2ef56e27d00d12e8dc85f396", [ref: "6640ce7d24c783ac2ef56e27d00d12e8dc85f396"]},
"http_signatures": {:git, "https://akkoma.dev/AkkomaGang/http_signatures.git", "d44c43d66758c6a73eaa4da9cffdbee0c5da44ae", [ref: "d44c43d66758c6a73eaa4da9cffdbee0c5da44ae"]},
"httpoison": {:hex, :httpoison, "1.8.2", "9eb9c63ae289296a544842ef816a85d881d4a31f518a0fec089aaa744beae290", [:mix], [{:hackney, "~> 1.17", [hex: :hackney, repo: "hexpm", optional: false]}], "hexpm", "2bb350d26972e30c96e2ca74a1aaf8293d61d0742ff17f01e0279fef11599921"},
"idna": {:hex, :idna, "6.1.1", "8a63070e9f7d0c62eb9d9fcb360a7de382448200fbbd1b106cc96d3d8099df8d", [:rebar3], [{:unicode_util_compat, "~> 0.7.0", [hex: :unicode_util_compat, repo: "hexpm", optional: false]}], "hexpm", "92376eb7894412ed19ac475e4a86f7b413c1b9fbb5bd16dccd57934157944cea"},
"inet_cidr": {:hex, :inet_cidr, "1.0.8", "d26bb7bdbdf21ae401ead2092bf2bb4bf57fe44a62f5eaa5025280720ace8a40", [:mix], [], "hexpm", "d5b26da66603bb56c933c65214c72152f0de9a6ea53618b56d63302a68f6a90e"},
"jason": {:hex, :jason, "1.4.1", "af1504e35f629ddcdd6addb3513c3853991f694921b1b9368b0bd32beb9f1b63", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "fbb01ecdfd565b56261302f7e1fcc27c4fb8f32d56eab74db621fc154604a7a1"},
"joken": {:hex, :joken, "2.6.1", "2ca3d8d7f83bf7196296a3d9b2ecda421a404634bfc618159981a960020480a1", [:mix], [{:jose, "~> 1.11.9", [hex: :jose, repo: "hexpm", optional: false]}], "hexpm", "ab26122c400b3d254ce7d86ed066d6afad27e70416df947cdcb01e13a7382e68"},
"jose": {:hex, :jose, "1.11.9", "c861eb99d9e9f62acd071dc5a49ffbeab9014e44490cd85ea3e49e3d36184777", [:mix, :rebar3], [], "hexpm", "b5ccc3749d2e1638c26bed806259df5bc9e438797fe60dc71e9fa0716133899b"},
"jose": {:hex, :jose, "1.11.10", "a903f5227417bd2a08c8a00a0cbcc458118be84480955e8d251297a425723f83", [:mix, :rebar3], [], "hexpm", "0d6cd36ff8ba174db29148fc112b5842186b68a90ce9fc2b3ec3afe76593e614"},
"jumper": {:hex, :jumper, "1.0.2", "68cdcd84472a00ac596b4e6459a41b3062d4427cbd4f1e8c8793c5b54f1406a7", [:mix], [], "hexpm", "9b7782409021e01ab3c08270e26f36eb62976a38c1aa64b2eaf6348422f165e1"},
"linkify": {:git, "https://akkoma.dev/AkkomaGang/linkify.git", "2567e2c1073fa371fd26fd66dfa5bc77b6919c16", []},
"linkify": {:hex, :linkify, "0.5.3", "5f8143d8f61f5ff08d3aeeff47ef6509492b4948d8f08007fbf66e4d2246a7f2", [:mix], [], "hexpm", "3ef35a1377d47c25506e07c1c005ea9d38d700699d92ee92825f024434258177"},
"mail": {:hex, :mail, "0.3.1", "cb0a14e4ed8904e4e5a08214e686ccf6f9099346885db17d8c309381f865cc5c", [:mix], [], "hexpm", "1db701e89865c1d5fa296b2b57b1cd587587cca8d8a1a22892b35ef5a8e352a6"},
"majic": {:git, "https://akkoma.dev/AkkomaGang/majic.git", "80540b36939ec83f48e76c61e5000e0fd67706f0", [ref: "80540b36939ec83f48e76c61e5000e0fd67706f0"]},
"makeup": {:hex, :makeup, "1.1.1", "fa0bc768698053b2b3869fa8a62616501ff9d11a562f3ce39580d60860c3a55e", [:mix], [{:nimble_parsec, "~> 1.2.2 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "5dc62fbdd0de44de194898b6710692490be74baa02d9d108bc29f007783b0b48"},
"makeup": {:hex, :makeup, "1.1.2", "9ba8837913bdf757787e71c1581c21f9d2455f4dd04cfca785c70bbfff1a76a3", [:mix], [{:nimble_parsec, "~> 1.2.2 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "cce1566b81fbcbd21eca8ffe808f33b221f9eee2cbc7a1706fc3da9ff18e6cac"},
"makeup_elixir": {:hex, :makeup_elixir, "0.16.2", "627e84b8e8bf22e60a2579dad15067c755531fea049ae26ef1020cad58fe9578", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 1.2.3 or ~> 1.3", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm", "41193978704763f6bbe6cc2758b84909e62984c7752b3784bd3c218bb341706b"},
"makeup_erlang": {:hex, :makeup_erlang, "0.1.5", "e0ff5a7c708dda34311f7522a8758e23bfcd7d8d8068dc312b5eb41c6fd76eba", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "94d2e986428585a21516d7d7149781480013c56e30c6a233534bedf38867a59a"},
"makeup_erlang": {:hex, :makeup_erlang, "1.0.0", "6f0eff9c9c489f26b69b61440bf1b238d95badae49adac77973cbacae87e3c2e", [:mix], [{:makeup, "~> 1.0", [hex: :makeup, repo: "hexpm", optional: false]}], "hexpm", "ea7a9307de9d1548d2a72d299058d1fd2339e3d398560a0e46c27dab4891e4d2"},
"meck": {:hex, :meck, "0.9.2", "85ccbab053f1db86c7ca240e9fc718170ee5bda03810a6292b5306bf31bae5f5", [:rebar3], [], "hexpm", "81344f561357dc40a8344afa53767c32669153355b626ea9fcbc8da6b3045826"},
"metrics": {:hex, :metrics, "1.0.1", "25f094dea2cda98213cecc3aeff09e940299d950904393b2a29d191c346a8486", [:rebar3], [], "hexpm", "69b09adddc4f74a40716ae54d140f93beb0fb8978d8636eaded0c31b6f099f16"},
"mfm_parser": {:git, "https://akkoma.dev/AkkomaGang/mfm-parser.git", "b21ab7754024af096f2d14247574f55f0063295b", [ref: "b21ab7754024af096f2d14247574f55f0063295b"]},
"mime": {:hex, :mime, "2.0.5", "dc34c8efd439abe6ae0343edbb8556f4d63f178594894720607772a041b04b02", [:mix], [], "hexpm", "da0d64a365c45bc9935cc5c8a7fc5e49a0e0f9932a761c55d6c52b142780a05c"},
"mimerl": {:hex, :mimerl, "1.2.0", "67e2d3f571088d5cfd3e550c383094b47159f3eee8ffa08e64106cdf5e981be3", [:rebar3], [], "hexpm", "f278585650aa581986264638ebf698f8bb19df297f66ad91b18910dfc6e19323"},
"mimerl": {:hex, :mimerl, "1.3.0", "d0cd9fc04b9061f82490f6581e0128379830e78535e017f7780f37fea7545726", [:rebar3], [], "hexpm", "a1e15a50d1887217de95f0b9b0793e32853f7c258a5cd227650889b38839fe9d"},
"mint": {:hex, :mint, "1.5.2", "4805e059f96028948870d23d7783613b7e6b0e2fb4e98d720383852a760067fd", [:mix], [{:castore, "~> 0.1.0 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:hpax, "~> 0.1.1", [hex: :hpax, repo: "hexpm", optional: false]}], "hexpm", "d77d9e9ce4eb35941907f1d3df38d8f750c357865353e21d335bdcdf6d892a02"},
"mock": {:hex, :mock, "0.3.8", "7046a306b71db2488ef54395eeb74df0a7f335a7caca4a3d3875d1fc81c884dd", [:mix], [{:meck, "~> 0.9.2", [hex: :meck, repo: "hexpm", optional: false]}], "hexpm", "7fa82364c97617d79bb7d15571193fc0c4fe5afd0c932cef09426b3ee6fe2022"},
"mogrify": {:hex, :mogrify, "0.9.3", "238c782f00271dace01369ad35ae2e9dd020feee3443b9299ea5ea6bed559841", [:mix], [], "hexpm", "0189b1e1de27455f2b9ae8cf88239cefd23d38de9276eb5add7159aea51731e6"},
"mox": {:hex, :mox, "1.1.0", "0f5e399649ce9ab7602f72e718305c0f9cdc351190f72844599545e4996af73c", [:mix], [], "hexpm", "d44474c50be02d5b72131070281a5d3895c0e7a95c780e90bc0cfe712f633a13"},
"nimble_options": {:hex, :nimble_options, "1.1.0", "3b31a57ede9cb1502071fade751ab0c7b8dbe75a9a4c2b5bbb0943a690b63172", [:mix], [], "hexpm", "8bbbb3941af3ca9acc7835f5655ea062111c9c27bcac53e004460dfd19008a99"},
"nimble_options": {:hex, :nimble_options, "1.1.1", "e3a492d54d85fc3fd7c5baf411d9d2852922f66e69476317787a7b2bb000a61b", [:mix], [], "hexpm", "821b2470ca9442c4b6984882fe9bb0389371b8ddec4d45a9504f00a66f650b44"},
"nimble_parsec": {:hex, :nimble_parsec, "1.4.0", "51f9b613ea62cfa97b25ccc2c1b4216e81df970acd8e16e8d1bdc58fef21370d", [:mix], [], "hexpm", "9c565862810fb383e9838c1dd2d7d2c437b3d13b267414ba6af33e50d2d1cf28"},
"nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"},
"oban": {:hex, :oban, "2.17.8", "7fd7c8e82c7819afc1b5b5ed8d6d92bf0ecdd7ba170328fb043301eb06d32521", [:mix], [{:ecto_sql, "~> 3.10", [hex: :ecto_sql, repo: "hexpm", optional: false]}, {:ecto_sqlite3, "~> 0.9", [hex: :ecto_sqlite3, repo: "hexpm", optional: true]}, {:jason, "~> 1.1", [hex: :jason, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.16", [hex: :postgrex, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "a2165bf93843b7bcb68182c82725ddd4cb43c0c3719f114e7aa3b6c99c4b6129"},
"open_api_spex": {:hex, :open_api_spex, "3.18.3", "fefb84fe323cacfc92afdd0ecb9e89bc0261ae00b7e3167ffc2028ce3944de42", [:mix], [{:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:plug, "~> 1.7", [hex: :plug, repo: "hexpm", optional: false]}, {:poison, "~> 3.0 or ~> 4.0 or ~> 5.0", [hex: :poison, repo: "hexpm", optional: true]}, {:ymlr, "~> 2.0 or ~> 3.0 or ~> 4.0 or ~> 5.0", [hex: :ymlr, repo: "hexpm", optional: true]}], "hexpm", "c0cfc31570199ce7e7520b494a591027da609af45f6bf9adce51e2469b1609fb"},
"oban": {:hex, :oban, "2.17.10", "c3e5bd739b5c3fdc38eba1d43ab270a8c6ca4463bb779b7705c69400b0d87678", [:mix], [{:ecto_sql, "~> 3.10", [hex: :ecto_sql, repo: "hexpm", optional: false]}, {:ecto_sqlite3, "~> 0.9", [hex: :ecto_sqlite3, repo: "hexpm", optional: true]}, {:jason, "~> 1.1", [hex: :jason, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.16", [hex: :postgrex, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "4afd027b8e2bc3c399b54318b4f46ee8c40251fb55a285cb4e38b5363f0ee7c4"},
"open_api_spex": {:hex, :open_api_spex, "3.19.1", "65ccb5d06e3d664d1eec7c5ea2af2289bd2f37897094a74d7219fb03fc2b5994", [:mix], [{:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:plug, "~> 1.7", [hex: :plug, repo: "hexpm", optional: false]}, {:poison, "~> 3.0 or ~> 4.0 or ~> 5.0", [hex: :poison, repo: "hexpm", optional: true]}, {:ymlr, "~> 2.0 or ~> 3.0 or ~> 4.0 or ~> 5.0", [hex: :ymlr, repo: "hexpm", optional: true]}], "hexpm", "392895827ce2984a3459c91a484e70708132d8c2c6c5363972b4b91d6bbac3dd"},
"parse_trans": {:hex, :parse_trans, "3.4.1", "6e6aa8167cb44cc8f39441d05193be6e6f4e7c2946cb2759f015f8c56b76e5ff", [:rebar3], [], "hexpm", "620a406ce75dada827b82e453c19cf06776be266f5a67cff34e1ef2cbb60e49a"},
"phoenix": {:hex, :phoenix, "1.7.12", "1cc589e0eab99f593a8aa38ec45f15d25297dd6187ee801c8de8947090b5a9d3", [:mix], [{:castore, ">= 0.0.0", [hex: :castore, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:phoenix_pubsub, "~> 2.1", [hex: :phoenix_pubsub, repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}, {:plug_cowboy, "~> 2.7", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:plug_crypto, "~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}, {:websock_adapter, "~> 0.5.3", [hex: :websock_adapter, repo: "hexpm", optional: false]}], "hexpm", "d646192fbade9f485b01bc9920c139bfdd19d0f8df3d73fd8eaf2dfbe0d2837c"},
"phoenix_ecto": {:hex, :phoenix_ecto, "4.5.1", "6fdbc334ea53620e71655664df6f33f670747b3a7a6c4041cdda3e2c32df6257", [:mix], [{:ecto, "~> 3.5", [hex: :ecto, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.1", [hex: :phoenix_html, repo: "hexpm", optional: true]}, {:plug, "~> 1.9", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "ebe43aa580db129e54408e719fb9659b7f9e0d52b965c5be26cdca416ecead28"},
"phoenix_html": {:hex, :phoenix_html, "3.3.3", "380b8fb45912b5638d2f1d925a3771b4516b9a78587249cabe394e0a5d579dc9", [:mix], [{:plug, "~> 1.5", [hex: :plug, repo: "hexpm", optional: true]}], "hexpm", "923ebe6fec6e2e3b3e569dfbdc6560de932cd54b000ada0208b5f45024bdd76c"},
"phoenix_live_dashboard": {:hex, :phoenix_live_dashboard, "0.8.3", "7ff51c9b6609470f681fbea20578dede0e548302b0c8bdf338b5a753a4f045bf", [:mix], [{:ecto, "~> 3.6.2 or ~> 3.7", [hex: :ecto, repo: "hexpm", optional: true]}, {:ecto_mysql_extras, "~> 0.5", [hex: :ecto_mysql_extras, repo: "hexpm", optional: true]}, {:ecto_psql_extras, "~> 0.7", [hex: :ecto_psql_extras, repo: "hexpm", optional: true]}, {:ecto_sqlite3_extras, "~> 1.1.7 or ~> 1.2.0", [hex: :ecto_sqlite3_extras, repo: "hexpm", optional: true]}, {:mime, "~> 1.6 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:phoenix_live_view, "~> 0.19 or ~> 1.0", [hex: :phoenix_live_view, repo: "hexpm", optional: false]}, {:telemetry_metrics, "~> 0.6 or ~> 1.0", [hex: :telemetry_metrics, repo: "hexpm", optional: false]}], "hexpm", "f9470a0a8bae4f56430a23d42f977b5a6205fdba6559d76f932b876bfaec652d"},
"phoenix_live_view": {:hex, :phoenix_live_view, "0.20.14", "70fa101aa0539e81bed4238777498f6215e9dda3461bdaa067cad6908110c364", [:mix], [{:floki, "~> 0.36", [hex: :floki, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:phoenix, "~> 1.6.15 or ~> 1.7.0", [hex: :phoenix, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 3.3 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:plug, "~> 1.15", [hex: :plug, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.2 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "82f6d006c5264f979ed5eb75593d808bbe39020f20df2e78426f4f2d570e2402"},
"phoenix_ecto": {:hex, :phoenix_ecto, "4.6.1", "96798325fab2fed5a824ca204e877b81f9afd2e480f581e81f7b4b64a5a477f2", [:mix], [{:ecto, "~> 3.5", [hex: :ecto, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.1", [hex: :phoenix_html, repo: "hexpm", optional: true]}, {:plug, "~> 1.9", [hex: :plug, repo: "hexpm", optional: false]}, {:postgrex, "~> 0.17", [hex: :postgrex, repo: "hexpm", optional: true]}], "hexpm", "0ae544ff99f3c482b0807c5cec2c8289e810ecacabc04959d82c3337f4703391"},
"phoenix_html": {:hex, :phoenix_html, "3.3.4", "42a09fc443bbc1da37e372a5c8e6755d046f22b9b11343bf885067357da21cb3", [:mix], [{:plug, "~> 1.5", [hex: :plug, repo: "hexpm", optional: true]}], "hexpm", "0249d3abec3714aff3415e7ee3d9786cb325be3151e6c4b3021502c585bf53fb"},
"phoenix_live_dashboard": {:hex, :phoenix_live_dashboard, "0.7.2", "97cc4ff2dba1ebe504db72cb45098cb8e91f11160528b980bd282cc45c73b29c", [:mix], [{:ecto, "~> 3.6.2 or ~> 3.7", [hex: :ecto, repo: "hexpm", optional: true]}, {:ecto_mysql_extras, "~> 0.5", [hex: :ecto_mysql_extras, repo: "hexpm", optional: true]}, {:ecto_psql_extras, "~> 0.7", [hex: :ecto_psql_extras, repo: "hexpm", optional: true]}, {:mime, "~> 1.6 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:phoenix_live_view, "~> 0.18.3", [hex: :phoenix_live_view, repo: "hexpm", optional: false]}, {:telemetry_metrics, "~> 0.6 or ~> 1.0", [hex: :telemetry_metrics, repo: "hexpm", optional: false]}], "hexpm", "0e5fdf063c7a3b620c566a30fcf68b7ee02e5e46fe48ee46a6ec3ba382dc05b7"},
"phoenix_live_view": {:hex, :phoenix_live_view, "0.18.18", "1f38fbd7c363723f19aad1a04b5490ff3a178e37daaf6999594d5f34796c47fc", [:mix], [{:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: true]}, {:phoenix, "~> 1.6.15 or ~> 1.7.0", [hex: :phoenix, repo: "hexpm", optional: false]}, {:phoenix_html, "~> 3.3", [hex: :phoenix_html, repo: "hexpm", optional: false]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}, {:phoenix_view, "~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.2 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "a5810d0472f3189ede6d2a95bda7f31c6113156b91784a3426cb0ab6a6d85214"},
"phoenix_pubsub": {:hex, :phoenix_pubsub, "2.1.3", "3168d78ba41835aecad272d5e8cd51aa87a7ac9eb836eabc42f6e57538e3731d", [:mix], [], "hexpm", "bba06bc1dcfd8cb086759f0edc94a8ba2bc8896d5331a1e2c2902bf8e36ee502"},
"phoenix_swoosh": {:hex, :phoenix_swoosh, "1.2.1", "b74ccaa8046fbc388a62134360ee7d9742d5a8ae74063f34eb050279de7a99e1", [:mix], [{:finch, "~> 0.8", [hex: :finch, repo: "hexpm", optional: true]}, {:hackney, "~> 1.10", [hex: :hackney, repo: "hexpm", optional: true]}, {:phoenix, "~> 1.6", [hex: :phoenix, repo: "hexpm", optional: true]}, {:phoenix_html, "~> 3.0 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: true]}, {:phoenix_view, "~> 1.0 or ~> 2.0", [hex: :phoenix_view, repo: "hexpm", optional: false]}, {:swoosh, "~> 1.5", [hex: :swoosh, repo: "hexpm", optional: false]}], "hexpm", "4000eeba3f9d7d1a6bf56d2bd56733d5cadf41a7f0d8ffe5bb67e7d667e204a2"},
"phoenix_template": {:hex, :phoenix_template, "1.0.4", "e2092c132f3b5e5b2d49c96695342eb36d0ed514c5b252a77048d5969330d639", [:mix], [{:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: true]}], "hexpm", "2c0c81f0e5c6753faf5cca2f229c9709919aba34fab866d3bc05060c9c444206"},
"phoenix_view": {:hex, :phoenix_view, "2.0.3", "4d32c4817fce933693741deeb99ef1392619f942633dde834a5163124813aad3", [:mix], [{:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: true]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}], "hexpm", "cd34049af41be2c627df99cd4eaa71fc52a328c0c3d8e7d4aa28f880c30e7f64"},
"plug": {:hex, :plug, "1.15.3", "712976f504418f6dff0a3e554c40d705a9bcf89a7ccef92fc6a5ef8f16a30a97", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "cc4365a3c010a56af402e0809208873d113e9c38c401cabd88027ef4f5c01fd2"},
"phoenix_view": {:hex, :phoenix_view, "2.0.4", "b45c9d9cf15b3a1af5fb555c674b525391b6a1fe975f040fb4d913397b31abf4", [:mix], [{:phoenix_html, "~> 2.14.2 or ~> 3.0 or ~> 4.0", [hex: :phoenix_html, repo: "hexpm", optional: true]}, {:phoenix_template, "~> 1.0", [hex: :phoenix_template, repo: "hexpm", optional: false]}], "hexpm", "4e992022ce14f31fe57335db27a28154afcc94e9983266835bb3040243eb620b"},
"plug": {:hex, :plug, "1.16.0", "1d07d50cb9bb05097fdf187b31cf087c7297aafc3fed8299aac79c128a707e47", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug_crypto, "~> 1.1.1 or ~> 1.2 or ~> 2.0", [hex: :plug_crypto, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4.3 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "cbf53aa1f5c4d758a7559c0bd6d59e286c2be0c6a1fac8cc3eee2f638243b93e"},
"plug_cowboy": {:hex, :plug_cowboy, "2.7.1", "87677ffe3b765bc96a89be7960f81703223fe2e21efa42c125fcd0127dd9d6b2", [:mix], [{:cowboy, "~> 2.7", [hex: :cowboy, repo: "hexpm", optional: false]}, {:cowboy_telemetry, "~> 0.3", [hex: :cowboy_telemetry, repo: "hexpm", optional: false]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "02dbd5f9ab571b864ae39418db7811618506256f6d13b4a45037e5fe78dc5de3"},
"plug_crypto": {:hex, :plug_crypto, "2.1.0", "f44309c2b06d249c27c8d3f65cfe08158ade08418cf540fd4f72d4d6863abb7b", [:mix], [], "hexpm", "131216a4b030b8f8ce0f26038bc4421ae60e4bb95c5cf5395e1421437824c4fa"},
"plug_static_index_html": {:hex, :plug_static_index_html, "1.0.0", "840123d4d3975585133485ea86af73cb2600afd7f2a976f9f5fd8b3808e636a0", [:mix], [{:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "79fd4fcf34d110605c26560cbae8f23c603ec4158c08298bd4360fdea90bb5cf"},
@@ -109,14 +107,13 @@
"recon": {:hex, :recon, "2.5.5", "c108a4c406fa301a529151a3bb53158cadc4064ec0c5f99b03ddb8c0e4281bdf", [:mix, :rebar3], [], "hexpm", "632a6f447df7ccc1a4a10bdcfce71514412b16660fe59deca0fcf0aa3c054404"},
"remote_ip": {:hex, :remote_ip, "1.1.0", "cb308841595d15df3f9073b7c39243a1dd6ca56e5020295cb012c76fbec50f2d", [:mix], [{:combine, "~> 0.10", [hex: :combine, repo: "hexpm", optional: false]}, {:plug, "~> 1.14", [hex: :plug, repo: "hexpm", optional: false]}], "hexpm", "616ffdf66aaad6a72fc546dabf42eed87e2a99e97b09cbd92b10cc180d02ed74"},
"search_parser": {:git, "https://github.com/FloatingGhost/pleroma-contrib-search-parser.git", "08971a81e68686f9ac465cfb6661d51c5e4e1e7f", [ref: "08971a81e68686f9ac465cfb6661d51c5e4e1e7f"]},
"sleeplocks": {:hex, :sleeplocks, "1.1.2", "d45aa1c5513da48c888715e3381211c859af34bee9b8290490e10c90bb6ff0ca", [:rebar3], [], "hexpm", "9fe5d048c5b781d6305c1a3a0f40bb3dfc06f49bf40571f3d2d0c57eaa7f59a5"},
"sleeplocks": {:hex, :sleeplocks, "1.1.3", "96a86460cc33b435c7310dbd27ec82ca2c1f24ae38e34f8edde97f756503441a", [:rebar3], [], "hexpm", "d3b3958552e6eb16f463921e70ae7c767519ef8f5be46d7696cc1ed649421321"},
"ssl_verify_fun": {:hex, :ssl_verify_fun, "1.1.7", "354c321cf377240c7b8716899e182ce4890c5938111a1296add3ec74cf1715df", [:make, :mix, :rebar3], [], "hexpm", "fe4c190e8f37401d30167c8c405eda19469f34577987c76dde613e838bbc67f8"},
"statistex": {:hex, :statistex, "1.0.0", "f3dc93f3c0c6c92e5f291704cf62b99b553253d7969e9a5fa713e5481cd858a5", [:mix], [], "hexpm", "ff9d8bee7035028ab4742ff52fc80a2aa35cece833cf5319009b52f1b5a86c27"},
"sweet_xml": {:hex, :sweet_xml, "0.7.4", "a8b7e1ce7ecd775c7e8a65d501bc2cd933bff3a9c41ab763f5105688ef485d08", [:mix], [], "hexpm", "e7c4b0bdbf460c928234951def54fe87edf1a170f6896675443279e2dbeba167"},
"swoosh": {:hex, :swoosh, "1.14.4", "94e9dba91f7695a10f49b0172c4a4cb658ef24abef7e8140394521b7f3bbb2d4", [:mix], [{:cowboy, "~> 1.1 or ~> 2.4", [hex: :cowboy, repo: "hexpm", optional: true]}, {:ex_aws, "~> 2.1", [hex: :ex_aws, repo: "hexpm", optional: true]}, {:finch, "~> 0.6", [hex: :finch, repo: "hexpm", optional: true]}, {:gen_smtp, "~> 0.13 or ~> 1.0", [hex: :gen_smtp, repo: "hexpm", optional: true]}, {:hackney, "~> 1.9", [hex: :hackney, repo: "hexpm", optional: true]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:mail, "~> 0.2", [hex: :mail, repo: "hexpm", optional: true]}, {:mime, "~> 1.1 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:plug, "~> 1.9", [hex: :plug, repo: "hexpm", optional: true]}, {:plug_cowboy, ">= 1.0.0", [hex: :plug_cowboy, repo: "hexpm", optional: true]}, {:req, "~> 0.4 or ~> 1.0", [hex: :req, repo: "hexpm", optional: true]}, {:telemetry, "~> 0.4.2 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "081c5a590e4ba85cc89baddf7b2beecf6c13f7f84a958f1cd969290815f0f026"},
"syslog": {:hex, :syslog, "1.1.0", "6419a232bea84f07b56dc575225007ffe34d9fdc91abe6f1b2f254fd71d8efc2", [:rebar3], [], "hexpm", "4c6a41373c7e20587be33ef841d3de6f3beba08519809329ecc4d27b15b659e1"},
"table_rex": {:hex, :table_rex, "4.0.0", "3c613a68ebdc6d4d1e731bc973c233500974ec3993c99fcdabb210407b90959b", [:mix], [], "hexpm", "c35c4d5612ca49ebb0344ea10387da4d2afe278387d4019e4d8111e815df8f55"},
"tailwind": {:hex, :tailwind, "0.2.2", "9e27288b568ede1d88517e8c61259bc214a12d7eed271e102db4c93fcca9b2cd", [:mix], [{:castore, ">= 0.0.0", [hex: :castore, repo: "hexpm", optional: false]}], "hexpm", "ccfb5025179ea307f7f899d1bb3905cd0ac9f687ed77feebc8f67bdca78565c4"},
"telemetry": {:hex, :telemetry, "1.2.1", "68fdfe8d8f05a8428483a97d7aab2f268aaff24b49e0f599faa091f1d4e7f61c", [:rebar3], [], "hexpm", "dad9ce9d8effc621708f99eac538ef1cbe05d6a874dd741de2e689c47feafed5"},
"telemetry_metrics": {:hex, :telemetry_metrics, "0.6.2", "2caabe9344ec17eafe5403304771c3539f3b6e2f7fb6a6f602558c825d0d0bfb", [:mix], [{:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "9b43db0dc33863930b9ef9d27137e78974756f5f198cae18409970ed6fa5b561"},
"telemetry_metrics_prometheus": {:hex, :telemetry_metrics_prometheus, "1.1.0", "1cc23e932c1ef9aa3b91db257ead31ea58d53229d407e059b29bb962c1505a13", [:mix], [{:plug_cowboy, "~> 2.1", [hex: :plug_cowboy, repo: "hexpm", optional: false]}, {:telemetry_metrics_prometheus_core, "~> 1.0", [hex: :telemetry_metrics_prometheus_core, repo: "hexpm", optional: false]}], "hexpm", "d43b3659b3244da44fe0275b717701542365d4519b79d9ce895b9719c1ce4d26"},

@@ -5,8 +5,8 @@ msgstr ""
"POT-Creation-Date: 2022-07-28 09:35+0000\n"
"PO-Revision-Date: 2023-08-04 14:19+0000\n"
"Last-Translator: Anonymous <noreply@weblate.org>\n"
"Language-Team: Catalan <http://translate.akkoma.dev/projects/akkoma/"
"akkoma-backend-config-descriptions/ca/>\n"
"Language-Team: Catalan <http://translate.akkoma.dev/projects/akkoma/akkoma-"
"backend-config-descriptions/ca/>\n"
"Language: ca\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
@@ -3296,18 +3296,6 @@ msgstr ""
"If enabled, a name parameter will be added to the URL of the upload. For "
"example `https://instance.tld/media/imagehash.png?name=realname.png`."

#: lib/pleroma/docs/translator.ex:5
#, fuzzy
msgctxt "config description at :pleroma-Pleroma.Upload > :proxy_remote"
msgid ""
"Proxy requests to the remote uploader.\n"
"\n"
"Useful if media upload endpoint is not internet accessible.\n"
msgstr ""
"Proxy requests to the remote uploader.\n"
"\n"
"Useful if media upload endpoint is not internet accessible.\n"

#: lib/pleroma/docs/translator.ex:5
#, fuzzy
msgctxt "config description at :pleroma-Pleroma.Upload > :uploader"
@@ -5798,12 +5786,6 @@ msgctxt "config label at :pleroma-Pleroma.Upload > :link_name"
msgid "Link name"
msgstr "Link name"

#: lib/pleroma/docs/translator.ex:5
#, fuzzy
msgctxt "config label at :pleroma-Pleroma.Upload > :proxy_remote"
msgid "Proxy remote"
msgstr "Proxy remote"

#: lib/pleroma/docs/translator.ex:5
#, fuzzy
msgctxt "config label at :pleroma-Pleroma.Upload > :uploader"

Some files were not shown because too many files have changed in this diff.