Compare commits

422 commits

Author SHA1 Message Date
a38b021427 revert a99eaf06b8
revert Allow to bite users

Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
Signed-off-by: limepotato <limepot@protonmail.ch>
2024-09-02 17:40:41 -06:00
marcin mikołajczak
a99eaf06b8 Allow to bite users
Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
Signed-off-by: limepotato <limepot@protonmail.ch>
2024-09-02 17:37:39 -06:00
floatingghost
3bb31117e6 Merge pull request 'Handle domain mutes on the backend' (#804) from domain-mute-backend-processing into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/804
2024-08-20 10:32:47 +00:00
Floatingghost
2c5c531c35 readd comment about domain mutes 2024-08-20 11:05:36 +01:00
floatingghost
3ff0f46b9f Merge pull request 'Docs: Improve backup restore + fix warnings' (#554) from ilja/akkoma:docs_db_create_in_separate_commands into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/554
2024-06-25 21:33:42 +00:00
floatingghost
4f0cb61782 Merge pull request 'Move prune changelog entries to correct version' (#808) from norm/akkoma:prune-changelog into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/808
2024-06-23 02:20:36 +00:00
floatingghost
5fdb5d69d2 Merge pull request 'Update Caddyfile' (#809) from norm/akkoma:caddyfile-update into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/809
2024-06-23 02:20:24 +00:00
floatingghost
f66135ed08 Merge pull request 'Avoid accumulation of stale data in websockets' (#806) from Oneric/akkoma:websocket_fullsweep into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/806
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
2024-06-23 02:19:36 +00:00
floatingghost
dc34328f15 Merge pull request 'Fix elixir 1.17 and migration lock warnings' (#810) from Oneric/akkoma:ex1.17-warnings into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/810
2024-06-23 02:18:41 +00:00
Oneric
13e2a811ec Avoid accumulation of stale data in websockets
We’ve received reports of some specific instances slowly accumulating
more and more binary data over time up to OOMs and globally setting
ERL_FULLSWEEP_AFTER=0 has proven to be an effective countermeasure.
However, this incurs increased cpu perf costs everywhere and is
thus not suitable to apply out of the box.

Apparently long-lived Phoenix websocket processes are known to
often cause exactly this by getting into a state unfavourable
for the garbage collector.
Therefore it seems likely affected instances are using timeline
streaming and do so in just the right way to trigger this. We
can tune the garbage collector just for websocket processes
and use a more lenient value of 20 to keep the added perf cost
in check.

Testing on one affected instance appears to confirm this theory

Ref.:
  https://www.erlang.org/doc/man/erlang#ghlink-process_flag-2-idp226
  https://blog.guzman.codes/using-phoenix-channels-high-memory-usage-save-money-with-erlfullsweepafter
  https://git.pleroma.social/pleroma/pleroma/-/merge_requests/4060

Tested-by: bjo
2024-06-22 22:22:33 +02:00
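
For illustration, a minimal Elixir sketch of the per-process GC tuning described above (the module and callback are assumptions, not Akkoma's actual websocket handler):

```elixir
defmodule StreamerSocketSketch do
  # Hypothetical init callback of a long-lived websocket process.
  def init(state) do
    # Run a fullsweep GC after 20 minor collections instead of the BEAM
    # default (65535), curbing stale binary accumulation in this process
    # without the VM-wide CPU cost of ERL_FULLSWEEP_AFTER=0.
    :erlang.process_flag(:fullsweep_after, 20)
    {:ok, state}
  end
end
```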
Oneric
1a4238bf98 cosmetic: fix concurrent index creation warnings
Since those old migrations will now most likely only run during db init,
there’s not much point in running them in the background concurrently
anyway, so just drop the concurrent setting rather than disabling
migration locks.
2024-06-19 02:25:23 +02:00
Oneric
c3069b9478 cosmetic: fix elixir 1.17 compiler warnings in main application 2024-06-19 01:49:59 +02:00
Norm
51f09531c4 Disable gzip compression in Caddyfile
Currently Akkoma doesn't have any proper mitigations against BREACH,
which exploits the use of HTTP compression to exfiltrate sensitive data.
(see: https://akkoma.dev/AkkomaGang/akkoma/pulls/721#issuecomment-11487)

To err on the side of caution, disable gzip compression for now until we
can confirm that there's some sort of mitigation in place (whether that
would be Heal-The-Breach on the Caddy side or any Akkoma-side
mitigations).
2024-06-17 23:13:55 -04:00
Norm
962847fdc3 Uncomment media subdomain settings in Caddyfile
Now that a media subdomain is strongly recommended for security reasons,
there is no reason for them to be commented out by default.
2024-06-17 23:12:55 -04:00
Norm
83aab0859a Move prune changelog entries to correct version 2024-06-17 22:41:40 -04:00
Weblate
eb2b0d26e4 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
91870590ec Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
c442877c25 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
16af0bad55 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
16ee6ed500 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
babf5df0e7 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
5767f59294 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
72ce0b7759 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
0cf9b44179 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
3cf335c4d0 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
1556e2be8e Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
629077dce4 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
50256af6f6 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
c5d36d9679 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
fb4c5b97c7 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
a715cf4b3c Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:04 +00:00
Weblate
693a6486da Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:03 +00:00
Weblate
4e353f0335 Update translation files
Updated by "Update PO files to match POT (msgmerge)" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-06-17 21:53:03 +00:00
floatingghost
5992e8bb16 Merge pull request 'Update http-signatures dep, allow created header' (#800) from created-pseudoheader into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/800
2024-06-17 21:52:59 +00:00
Floatingghost
57273754b7 we may as well handle (expires) as well 2024-06-17 22:30:14 +01:00
floatingghost
59bfdf2ca4 Merge pull request 'Add limit CLI flags to prune jobs' (#655) from Oneric/akkoma:prune-batch into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/655
2024-06-17 20:47:53 +00:00
floatingghost
a9e2e31e3b Merge pull request 'Remove proxy_remote vestiges' (#805) from Oneric/akkoma:purge_proxy_remote into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/805
2024-06-17 20:47:11 +00:00
Oneric
bf8f493ffd Remove proxy_remote vestiges
Ever since 364b6969eb
this setting wasn't used by the backend and was a noop.
The stated use case is better served by setting the base_url
to a local subdomain and using proxying in nginx/Caddy/...
2024-06-16 01:21:52 +02:00
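
A sketch of the replacement setup mentioned above, assuming a dedicated media subdomain (domain and file name are placeholders):

```elixir
# config/prod.secret.exs
import Config

# Serve uploads from a local subdomain; the reverse proxy
# (nginx/Caddy/...) handles the actual proxying.
config :pleroma, Pleroma.Upload,
  base_url: "https://media.example.com/media"
```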
Floatingghost
3b197503d2 me me stupid person 2024-06-15 15:30:02 +01:00
Floatingghost
c0b2bba55e revert subdomain change until i can look at why i did that 2024-06-15 15:14:42 +01:00
Floatingghost
4b765b1886 mix format 2024-06-15 15:06:28 +01:00
Floatingghost
cba2c5725f Filter emoji reaction accounts by domain blocks 2024-06-15 15:05:52 +01:00
Floatingghost
2b96c3b224 Update http-signatures dep, allow created header 2024-06-12 18:40:44 +01:00
floatingghost
b03edb4ff4 Merge pull request 'Fix StealEmoji’s max size check' (#793) from Oneric/akkoma:emojistealer_contentlength into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/793
2024-06-12 17:09:05 +00:00
floatingghost
5b75fb2a2f Merge pull request 'pool timeouts/rich media cherry-picks' (#796) from pool-timeouts into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/796
2024-06-12 17:08:06 +00:00
Floatingghost
4d6fb43cbd No need to spawn() any more 2024-06-12 02:09:24 +01:00
Floatingghost
ad52135bf5 Convert rich media backfill to oban task 2024-06-11 18:06:51 +01:00
Floatingghost
28d357f52c add diagnostic script 2024-06-10 15:10:47 +01:00
Floatingghost
9c5feb81aa fix tests 2024-06-09 21:26:29 +01:00
Floatingghost
a360836ce3 fix oembed test 2024-06-09 21:17:12 +01:00
Floatingghost
840c70c4fa remove prints 2024-06-09 18:52:09 +01:00
Floatingghost
c65379afea attempt to fix some tests 2024-06-09 18:45:38 +01:00
Floatingghost
16bed0562d Fix tests 2024-06-09 18:28:00 +01:00
Mark Felder
a801dd7b07 Fix module struct matching 2024-06-09 17:38:28 +01:00
Mark Felder
1e86da43f5 Credo 2024-06-09 17:38:24 +01:00
Mark Felder
411831458c Credo 2024-06-09 17:38:18 +01:00
Mark Felder
56463b2121 Fix compile warning
warning: "else" clauses will never match because all patterns in "with" will always match
  lib/pleroma/web/rich_media/parser/ttl/opengraph.ex:10
2024-06-09 17:38:12 +01:00
Mark Felder
2f5eb79473 Mastodon API: Remove deprecated GET /api/v1/statuses/:id/card endpoint
Removed back in 2019

https://github.com/mastodon/mastodon/pull/11213
2024-06-09 17:38:06 +01:00
Mark Felder
f4daa90bd8 Remove test validating missing descriptions are returned as an empty string 2024-06-09 17:37:59 +01:00
Mark Felder
688748b531 Improve test description 2024-06-09 17:37:32 +01:00
Mark Felder
2e5aa71176 Rich Media Cards are fetched asynchronously and not guaranteed to be available on first post render 2024-06-09 17:37:22 +01:00
Mark Felder
7ca655a999 Rich Media Cards are cached by URL not per status 2024-06-09 17:36:57 +01:00
Mark Felder
4746f98851 Fix broken Rich Media parsing when the image URL is a relative path 2024-06-09 17:36:28 +01:00
Mark Felder
765c7e98d2 Respect the TTL returned in OpenGraph tags 2024-06-09 17:36:15 +01:00
Mark Felder
ddbe989461 Fix broken tests 2024-06-09 17:35:47 +01:00
Floatingghost
4a3dd5f65e lost in cherry-pick 2024-06-09 17:34:41 +01:00
Mark Felder
bfe4152385 Increase the :max_body for Rich Media to 5MB
Websites are increasingly getting more bloated with tricks like inlining content (e.g., CNN.com) which puts pages at or above 5MB. This value may still be too low.
2024-06-09 17:34:29 +01:00
Mark Felder
5da9cbd8a5 RichMedia refactor
Rich Media parsing was previously handled on-demand with a 2 second HTTP request timeout and retained only in Cachex. Every time a Pleroma instance is restarted it will have to request and parse the data for each status with a URL detected. When fetching a batch of statuses they were processed in parallel to attempt to keep the maximum latency at 2 seconds, but often resulted in a timeline appearing to hang during loading due to a URL that could not be successfully reached. URLs which had image links that expire (Amazon AWS) were parsed and inserted with a TTL to ensure the image link would not break.

Rich Media data is now cached in the database and fetched asynchronously. Cachex is used as a read-through cache. When the data becomes available we stream an update to the clients. If the result is returned quickly the experience is almost seamless. Activities were already processed for their Rich Media data during ingestion to warm the cache, so users should not normally encounter the asynchronous loading of the Rich Media data.

Implementation notes:

- The async worker is a Task with a globally unique process name to prevent duplicate processing of the same URL
- The Task will attempt to fetch the data 3 times with increasing sleep time between attempts
- The HTTP request obeys the default HTTP request timeout value instead of 2 seconds
- URLs that cannot be successfully parsed due to an unexpected error receive a negative cache entry for 15 minutes
- URLs that fail with an expected error will receive a negative cache with no TTL
- Activities that have no detected URLs insert a nil value in the Cachex :scrubber_cache so we do not repeat parsing the object content with Floki every time the activity is rendered
- Expiring image URLs are handled with an Oban job
- There is no automatic cleanup of the Rich Media data in the database, but it is safe to delete at any time
- The post draft/preview feature makes the URL processing synchronous so the rendered post preview will have an accurate rendering

Overall performance of timelines and creating new posts which contain URLs is greatly improved.
2024-06-09 17:33:48 +01:00
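
An illustrative Elixir sketch of the async-worker pattern from the notes above — not Akkoma's actual module; `parse/1` is a stub standing in for the real parser:

```elixir
defmodule RichMediaBackfillSketch do
  # Spawn at most one fetch per URL by registering a globally unique name.
  def start_fetch(url) do
    Task.start(fn ->
      case :global.register_name({:rich_media_fetch, url}, self()) do
        :yes -> fetch_with_retries(url, 3, 1_000)
        :no -> :already_running
      end
    end)
  end

  # Attempt the fetch 3 times with increasing sleep between attempts.
  defp fetch_with_retries(_url, 0, _sleep), do: {:error, :exhausted}

  defp fetch_with_retries(url, attempts_left, sleep) do
    case parse(url) do
      {:ok, data} ->
        {:ok, data}

      {:error, _} ->
        Process.sleep(sleep)
        fetch_with_retries(url, attempts_left - 1, sleep * 2)
    end
  end

  defp parse(_url), do: {:error, :stub}
end
```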
Floatingghost
a924e117fd Add pool timeouts 2024-06-09 17:20:29 +01:00
floatingghost
d1c4b97613 Merge pull request 'Raise minimum PostgreSQL version to 12' (#786) from Oneric/akkoma:psql-min-ver into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/786
2024-06-07 16:53:22 +00:00
Oneric
2180d068ae Raise log level for start failures 2024-06-07 16:21:21 +02:00
Oneric
a3840e7d1f Raise minimum PostgreSQL version to 12
This lets us:
 - avoid issues with broken hash indices for PostgreSQL <10
 - drop runtime checks and legacy codepaths for <11 in db search
 - always enable custom query plans for performance optimisation

PostgreSQL 11 is already EOL since 2023-11-09, so
in theory everyone should already have moved on to 12 anyway.
2024-06-07 16:21:09 +02:00
Oneric
b17d3dc6d8 Fix changelog
Apparently got jumbled during some rebase(s)
2024-06-07 16:20:34 +02:00
floatingghost
f8f364d36d Merge pull request 'Handle errors from HTTP requests gracefully' (#791) from wp-embeds into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/791
2024-06-07 12:58:58 +00:00
floatingghost
329d8fcba8 Merge pull request 'Update PGTune recommendations' (#795) from norm/akkoma:pgtune into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/795
2024-06-07 12:57:00 +00:00
Norm
e2860e5292 Update PGTune recommendations
From experience, setting DB type to "Online transaction processing
system" seems to give the most optimal configuration in terms of
performance.

I also increased the recomended max connections to 25-30 as that leaves
some room for maintenance tasks to run without running out of
connections.

Finally, I removed the example configs since they're probably out of
date and I think it's better to direct people to use PGTune instead.
2024-06-06 12:18:51 -04:00
Oneric
df27567d99 mrf/steal_emoji: display download_unknown_size in admin-fe
Fixes omission in d6d838cbe8
2024-06-05 20:14:10 +02:00
Oneric
be5440c5e8 mrf/steal_emoji: fix size limit check
Headers are strings, but this code expected to already get an int,
thus always failing the comparison if the header was set.

Fixes mistake in d6d838cbe8
2024-06-05 20:11:53 +02:00
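
A sketch of the corrected comparison (function and option names are illustrative): header values arrive as strings and must be parsed before comparing against the byte limit:

```elixir
defmodule SizeCheckSketch do
  # Returns true if the response size is within the configured byte limit.
  def within_size_limit?(headers, size_limit) do
    case List.keyfind(headers, "content-length", 0) do
      {_, value} ->
        case Integer.parse(value) do
          {size, ""} -> size <= size_limit
          _ -> false
        end

      # No content-length header; a separate setting such as
      # download_unknown_size decides what happens here.
      nil -> true
    end
  end
end
```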
Oneric
68fe0a9633 test: fix content-length value type
All headers are strings, always.
In this case it didn't matter atm,
but let’s not provide confusing examples.
2024-06-05 19:59:59 +02:00
Floatingghost
0f65dd3ebe remove pointless logger 2024-06-04 14:34:59 +01:00
Floatingghost
38d09cb0ce remove now-pointless clause 2024-06-04 14:34:18 +01:00
Floatingghost
c9a03af7c1 Move rescue to the HTTP request itself 2024-06-04 14:30:16 +01:00
Floatingghost
0f7ae0fa21 am i baka 2024-06-04 14:26:33 +01:00
Floatingghost
30e13a8785 Don't error on rich media fail 2024-06-04 14:21:40 +01:00
Floatingghost
778b213945 enqueue pin fetches after changeset validation 2024-06-01 08:25:35 +01:00
Oneric
bed7ff8e89 mix: consistently use shell_info and shell_error
Logger output being visible depends on user configuration, but most of
the prints in mix tasks should always be shown. When running inside a
mix shell, it’s probably preferable to send output directly to it rather
than using raw IO.puts; we already have shell_* functions for this, so
let's use them everywhere.
2024-05-31 17:17:42 +02:00
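
A sketch of the helper pattern referred to above (Akkoma's actual implementation may differ): write through the Mix shell when one is available, otherwise fall back to raw IO:

```elixir
defmodule ShellHelperSketch do
  def shell_info(message) do
    if mix_shell?(), do: Mix.shell().info(message), else: IO.puts(message)
  end

  def shell_error(message) do
    if mix_shell?(), do: Mix.shell().error(message), else: IO.puts(:stderr, message)
  end

  # True when running under mix with a shell available.
  defp mix_shell?, do: Code.ensure_loaded?(Mix) and function_exported?(Mix, :shell, 0)
end
```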
Oneric
70cd5f91d8 dbprune/activites: prune array activities first
This query is less costly; if something goes wrong or gets aborted later
at least this part will already be done.
2024-05-31 17:16:40 +02:00
Oneric
aeaebb566c dbprune: allow splitting array and single activity prunes
The former is typically just a few reports; it doesn't make sense to
rerun it over and over again in batched prunes or if a full prune OOMed.
2024-05-31 17:16:40 +02:00
Oneric
5751637926 dbprune: use query! 2024-05-31 17:16:40 +02:00
Oneric
24bab63cd8 dbprune: add more logs
Pruning can go on for a long time; give admins some insight that
something is happening, to make it less frustrating and to make it
easier to tell which part of the process has stalled should this happen.

Again most of the changes are merely reindents;
review with whitespace changes hidden recommended.
2024-05-31 17:16:40 +02:00
Oneric
1d4c212441 dbprune: shortcut array activity search
This brought down query costs from 7,953,740.90 to 47,600.97
2024-05-31 17:16:40 +02:00
Oneric
6e7cbf1885 Test both standalone and flag mode for pruning orphaned activities 2024-05-31 17:16:40 +02:00
Oneric
225f87ad62 Also allow limiting the initial prune_object
May sometimes be helpful to get more predictable runtime
than just with an age-based limit.

The subquery for the non-keep-threads path is required
since delete_all does not directly accept limit().

Again most of the diff is just adjusting indentation, best
hide whitespace-only changes with git diff -w or similar.
2024-05-31 17:16:40 +02:00
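
An Ecto sketch of the subquery workaround mentioned above (`Object`, `Repo`, the cutoff and `limit` are stand-ins): `Repo.delete_all/1` ignores `limit`, so the limit is applied in an id-selecting subquery:

```elixir
import Ecto.Query

ids =
  from(o in Object,
    where: o.inserted_at < ago(30, "day"),
    limit: ^limit,
    select: o.id
  )

from(o in Object, where: o.id in subquery(ids))
|> Repo.delete_all()
```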
Oneric
e64f031167 Log number of deleted rows in prune_orphaned_activities
This gives feedback when to stop rerunning limited batches.

Most of the diff is just adjusting indentation; best reviewed
with whitespace-only changes hidden, e.g. `git diff -w`.
2024-05-31 17:16:40 +02:00
Oneric
fa52093bac Add standalone prune_orphaned_activities CLI task
This part of pruning can be very expensive and bog down the whole
instance to an unusable state for a long time. It can thus be desirable
to split it from prune_objects and run it on its own in smaller limited batches.

If the batches are small enough and spaced out a bit, it may even be possible
to avoid any downtime. If not, the limit can still help to at least make the
downtime duration somewhat more predictable.
2024-05-31 17:16:40 +02:00
Oneric
3126d15ffc refactor: move prune_orphaned_activities into own function
No logic changes. Preparation for standalone orphan pruning.
2024-05-31 17:16:39 +02:00
floatingghost
8f97c15b07 Merge pull request 'Preserve Meilisearch’s result ranking' (#772) from Oneric/akkoma:search-meili-order into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/772
2024-05-31 14:12:05 +00:00
Floatingghost
3af0c53a86 use proper workers for fetching pins instead of an ad-hoc task (#788)
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/788
Co-authored-by: Floatingghost <hannah@coffee-and-dreams.uk>
Co-committed-by: Floatingghost <hannah@coffee-and-dreams.uk>
2024-05-31 08:58:52 +00:00
Oneric
fc7e07f424 meilisearch: enable using search_key
Using only the admin key works as well currently
and Akkoma needs to know the admin key to be able
to add new entries etc. However the Meilisearch
key descriptions suggest the admin key is not
supposed to be used for searches, so let’s not.

For compatibility with existing configs, search_key remains optional.
2024-05-29 23:17:27 +00:00
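
What the resulting configuration might look like (keys as in Akkoma's search documentation; values are placeholders):

```elixir
import Config

config :pleroma, Pleroma.Search.Meilisearch,
  url: "http://127.0.0.1:7700/",
  private_key: "<admin key, used for indexing>",
  # Optional search-restricted key; when unset, the admin key
  # is used for searches as before.
  search_key: "<search-only key>"
```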
Oneric
59685e25d2 meilisearch: show keys by name not description
This makes show-key’s output match our documentation as of Meilisearch
1.8.0-8-g4d5971f343c00d45c11ef0cfb6f61e83a8508208. Since I’m not sure
if older versions maybe only provided description, it will fall back to
the latter if no name parameter exists.
2024-05-29 23:17:27 +00:00
Oneric
65aeaefa41 meilisearch: respect meili’s result ranking
Meilisearch is already configured to return results sorted by a
particular ranking configured in the meilisearch CLI task.
Resorting the returned top results by date partially negates this and
runs counter to what someone with tweaked settings expects.

Issue and fix identified by AdamK2003 in
https://akkoma.dev/AkkomaGang/akkoma/pulls/579
But instead of using an O(n^2) resorting, this commit directly
retrieves results in the correct order from the database.

Closes: https://akkoma.dev/AkkomaGang/akkoma/pulls/579
2024-05-29 23:17:27 +00:00
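
A sketch of fetching rows in a caller-supplied order (schema and id type are assumptions): PostgreSQL's `array_position` lets the database return results in exactly the ranked order Meilisearch produced, avoiding the O(n^2) in-memory resort:

```elixir
import Ecto.Query

# `ids` is the ranked id list returned by Meilisearch.
from(a in Activity,
  where: a.id in ^ids,
  order_by: fragment("array_position(?, ?)", type(^ids, {:array, :string}), a.id)
)
|> Repo.all()
```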
Oneric
5d6cb6a459 meilisearch: remove duplicate preload 2024-05-29 23:17:27 +00:00
floatingghost
8afc3bee7a Merge pull request 'Use /var/tmp for media cache path' (#776) from norm/akkoma:nginx-var-tmp into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/776
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
2024-05-28 02:05:17 +00:00
floatingghost
72871d4514 Merge pull request 'Drop unused indices' (#767) from Oneric/akkoma:purge-unused-indices into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/767
2024-05-28 01:35:18 +00:00
floatingghost
72af38c0e9 Merge pull request 'migrate CI config to v2' (#785) from woodpecker-v2 into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/785
2024-05-27 03:32:40 +00:00
Floatingghost
ae19fd90c9 use elixir 1.16 for format checks 2024-05-27 04:07:44 +01:00
Floatingghost
66b3248dd3 mix tests probably shouldn't be async 2024-05-27 04:03:13 +01:00
Floatingghost
73ead8656a don't allow emoji formatter to be async 2024-05-27 03:25:18 +01:00
Floatingghost
f32a7fd76a arch is aarch64 now 2024-05-27 03:02:02 +01:00
Floatingghost
4078fd655c migrate CI config to v2 2024-05-27 02:56:05 +01:00
floatingghost
5bdef8c724 Merge pull request 'Allow for attachment to be a single object in user data' (#783) from single-attachment into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/783
2024-05-27 01:44:53 +00:00
floatingghost
cdc918c8f1 Merge pull request 'Document AP and nodeinfo extensions' (#778) from Oneric/akkoma:doc_ap-extensions into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/778
2024-05-27 01:34:58 +00:00
Floatingghost
f15eded3e1 Add extra test case for nonsense field, increase timeouts 2024-05-27 02:09:48 +01:00
Oneric
05eda169fe Document AP and nodeinfo extensions
And while at it, point to this via a top-level
FEDERATION.md document as standardised by FEP-67ff.

Also add a few missing descriptions to the config cheatsheet
and move the recently removed C2S extension into an appropriate
subsection.
2024-05-26 19:04:06 +02:00
floatingghost
3ce855cbde Merge pull request 'Fix Exiftool stderr being read as an image description' (#782) from norm/akkoma:fix-exiftool-description into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/782
2024-05-26 16:11:12 +00:00
Floatingghost
da67e69af5 Allow for attachment to be a single object in user data 2024-05-26 17:09:26 +01:00
Norm
c2d3221be3 Fix Exiftool stderr being read as an image description
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/773
2024-05-23 14:44:17 -04:00
Floatingghost
5e92f955ac bump version 2024-05-22 19:42:25 +01:00
Floatingghost
b72127b45a Merge remote-tracking branch 'oneric-sec/media-owner' into develop 2024-05-22 19:36:10 +01:00
Oneric
9a91299f96 Don't try to handle non-media objects as media
Trying to display non-media as media crashed the renderer,
but when posting a status with a valid, non-media object id
the post was still created, but then crashed e.g. timeline rendering.
It also crashed C2S inbox reads, so this could not be used to leak
private posts.
2024-05-22 20:30:23 +02:00
Oneric
fbd961c747 Drop activity_type override for uploads
Afaict this was never used, but keeping this (in theory) possible
hinders detecting which objects are actually media uploads and
which are proper ActivityPub objects.

It was originally added as part of upload support itself in
02d3dc6869 without being used
and `git log -S:activity_type` and `git log -Sactivity_type:`
don't find any other commits using this.
2024-05-22 20:30:23 +02:00
Oneric
0c2b33458d Restrict media usage to owners
In Mastodon media can only be used by owners and only be associated with
a single post. We currently allow media to be associated with several
posts and until now did not limit their usage in posts to media owners.
However, media update and GET lookup was already limited to owners.
(In accordance with allowing media reuse, we also still allow GET
lookups of media already used in a post unlike Mastodon)

Allowing reuse isn’t problematic per se, but allowing use by non-owners
can be problematic if media ids of private-scoped posts can be guessed
since creating a new post with this media id will reveal the uploaded
file content and alt text.
Given media ids are currently just part of a sequential series shared
with some other objects, guessing media ids is, with some persistence,
indeed feasible.

E.g. sampline some public media ids from a real-world
instance with 112 total and 61 monthly-active users:

  17.465.096  at  t0
  17.472.673  at  t1 = t0 + 4h
  17.473.248  at  t2 = t1 + 20min

This gives about 30 new ids per minute of which most won't be
local media but remote and local posts, poll answers etc.
Assuming the default ratelimit of 15 post actions per 10s, scraping all
media for the 4h interval takes about 84 minutes and scraping the 20min
range mere 6.3 minutes. (Until the preceding commit, post updates were
not rate limited at all, allowing even faster scraping.)
If an attacker can infer (e.g. via reply to a follower-only post not
accessible to the attacker) some sensitive information was uploaded
during a specific time interval and has some pointers regarding the
nature of the information, identifying the specific upload out of all
scraped media for this timerange is not impossible.

Thus restrict media usage to owners.

Checking ownership just in ActivityDraft would already be sufficient,
since when a scheduled status actually gets posted it goes through
ActivityDraft again, but would erroneously return a success status
when scheduling an illegal post.

Independently discovered and fixed by mint in Pleroma
1afde067b1
2024-05-22 20:30:18 +02:00
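
For reference, the scraping figures quoted above can be reproduced from the sampled ids (a quick arithmetic check, not part of the commit):

```elixir
rate = 15 / 10                          # allowed post actions per second
(17_472_673 - 17_465_096) / rate / 60   # 7_577 ids over 4h  => ~84.2 minutes
(17_473_248 - 17_472_673) / rate / 60   # 575 ids over 20min => ~6.4 minutes
```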
Floatingghost
842cac2a50 ensure we mock_global 2024-05-22 19:30:03 +01:00
Lain Soykaf
3e1f5e5372 WebFingerControllerTest: Restore host after test. 2024-05-22 19:27:51 +01:00
marcin mikołajczak
3a21293970 Fix tests
Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
2024-05-22 19:27:31 +01:00
marcin mikołajczak
0d66237205 Fix validate_webfinger when running a different domain for Webfinger
Signed-off-by: marcin mikołajczak <git@mkljczk.pl>
2024-05-22 19:20:02 +01:00
Oneric
6ef6b2a289 Apply rate limits to status updates 2024-05-22 20:18:08 +02:00
Oneric
94e9c8f48a Purge unused media description update on post
In MastoAPI media descriptions are updated via the
media update API, not upon post creation or post update.

This functionality was originally added about 6 years ago in
ba93396649 which was part of
https://git.pleroma.social/pleroma/pleroma/-/merge_requests/626 and
https://git.pleroma.social/pleroma/pleroma-fe/-/merge_requests/450.
They introduced image descriptions to the front- and backend,
but predate adoption of Mastodon API.

For a while adding a `descriptions` array on post creation might have
continued to work as an undocumented Pleroma extension to Masto API, but
at latest when OpenAPI specs were added for those endpoints four years
ago in 7803a85d2c, these codepaths ceased
to be used. The API specs don’t list a `descriptions` parameter and
any unknown parameters are stripped out.

The attachments_from_ids function is only called from
ScheduledActivity and ActivityDraft.create with the latter
only being called by CommonAPI.{post,update} which in turn
are only called from ScheduledActivity again, the MastoAPI controller
and, without any attachment or description parameter, WelcomeMessage.
Therefore no codepath can contain a descriptions parameter.
2024-05-22 20:18:08 +02:00
Oneric
873aa9da1c activity_draft: mark new/2 as private 2024-05-22 20:18:08 +02:00
Oneric
34a48cb87f scheduled_activity: mark private functions as private
And remove unused due_activities/1
2024-05-22 20:18:08 +02:00
lain
50403351f4 add impostor test for webfinger 2024-05-22 19:17:34 +01:00
Alex Gleason
a953b1d927 Prevent spoofing webfinger 2024-05-22 19:08:37 +01:00
Norm
bb29c5bed2 Update tor/i2p guide
Direct users to add in the appropriate headers and update the listening
port instead of copy/pasting a config that's already outdated and
probably would otherwise have to be synced with the main example nginx
config.
2024-05-16 19:08:02 -04:00
Norm
bc46f3da4c Update mediaproxy howto
Since the configuration options on the nginx side already exist in the
sample config, there's no need to tell users to copy-paste those
settings in again.
2024-05-16 19:06:59 -04:00
Norm
7e709768c3 Use /var/tmp for media cache path in apache/nginx configs
The /var/tmp directory is not mounted as tmpfs, unlike /tmp, which is
mounted as such on some distros like Fedora or Arch. Since there isn't
really a benefit to having the cache on tmpfs, this change should allow
for a larger cache if needed without worrying about running out of RAM.
2024-05-15 20:42:48 -04:00
floatingghost
76ded10a70 Merge pull request 'Backoff on HTTP requests when 429 is received' (#762) from backoff-http into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/762
2024-05-11 04:38:47 +00:00
Floatingghost
4457928e32 duct-tape fix for #438
we really need to make this less manual
2024-05-11 05:30:18 +01:00
floatingghost
ee03149ba1 Merge pull request 'Fix Exiftool migration id' (#763) from Oneric/akkoma:fix-migration-timeline-exifdesc into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/763
2024-05-06 22:51:05 +00:00
Floatingghost
ea6bc8a7c5 add a test for 503-rate-limiting 2024-05-06 23:36:00 +01:00
Floatingghost
bd74693db6 additionally support retry-after values 2024-05-06 23:34:48 +01:00
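
A sketch of Retry-After handling as described (helper name and fallback delay are assumptions); the header may carry either a delay in seconds or an HTTP date:

```elixir
defmodule BackoffSketch do
  # Compute the time until which a host should be backed off.
  def backoff_until(headers, fallback_seconds \\ 300) do
    delay =
      case List.keyfind(headers, "retry-after", 0) do
        {_, value} ->
          case Integer.parse(value) do
            {seconds, ""} -> seconds
            # Retry-After may also be an HTTP date; a real implementation
            # would parse it, this sketch just uses the fallback.
            _ -> fallback_seconds
          end

        nil ->
          fallback_seconds
      end

    DateTime.add(DateTime.utc_now(), delay, :second)
  end
end
```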
Oneric
5256678901 Fix Exiftool migration id
Applying works fine with a 20220220135625 version, but it won’t be
rolled back in the right order. Fortunately this action is idempotent
so we can just rename and reapply it with a new id.

To also not break large-scale rollbacks past 2022 for anyone
who already applied it with the old id, keep a stub migration.
2024-05-07 00:16:21 +02:00
floatingghost
fdeecc7b4c Merge pull request 'Update clients list in docs' (#761) from norm/akkoma:docs-clients-update into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/761
2024-05-06 21:33:26 +00:00
floatingghost
51482c4fe8 Merge pull request 'Remove remaining Dokku files' (#766) from norm/akkoma:remove-dokku into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/766
2024-05-06 21:33:16 +00:00
Oneric
b7e3d44756 Drop unused indices
This promotes and expands our existing optional migration.
Based on usage statistics from several instances, see:
https://akkoma.dev/AkkomaGang/akkoma/issues/764

activities_hosts is now retained after all since it’s essential
for the "instance" query parameter of *oma’s public timeline to
reliably work in a reasonable amount of time. (Although akkoma-fe has
no support for this feature and apparently barely anyone uses it.)

activities_actor_index was already dropped before in
20221211234352_remove_unused_indices; no need to drop it again.

Birthday indices were introduced in pleroma starting with
20220116183110_add_birthday_to_users which is past the
last common migration 20210416051708.
2024-05-02 00:08:33 +02:00
Norm
8ae54b260a Remove remaining Dokku files 2024-04-29 13:45:58 -04:00
Floatingghost
21a81e1111 version bump with translations 2024-04-27 15:10:52 +01:00
Floatingghost
3738ab67bd Merge remote-tracking branch 'origin/translations' into develop 2024-04-27 15:10:23 +01:00
Floatingghost
7038b60ab5 bump version 2024-04-27 15:08:21 +01:00
Norm
549d580054 Add Enafore to clients list 2024-04-26 15:21:58 -04:00
Floatingghost
010e8c7bb2 where were you when lint fail 2024-04-26 19:28:01 +01:00
Floatingghost
9671cdecdf changelog entry 2024-04-26 19:10:17 +01:00
Floatingghost
f531484063 Merge branch 'develop' into backoff-http 2024-04-26 19:06:18 +01:00
Floatingghost
ec7e9da734 Correct ttl syntax for new cachex 2024-04-26 19:05:12 +01:00
FloatingGhost
3c384c1b76 Add ratelimit backoff to HTTP get 2024-04-26 19:01:12 +01:00
FloatingGhost
2437a3e9ba add test for backoff 2024-04-26 19:01:01 +01:00
FloatingGhost
ad7dcf38a8 Add HTTP backoff cache to respect 429s 2024-04-26 19:00:35 +01:00
Weblate
91d9d750c0 Update translation files
Updated by "Squash Git commits" hook in Weblate.

Translation: Pleroma fe/Akkoma Backend (Static pages)
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-static-pages/
2024-04-26 17:49:40 +00:00
Weblate
3c07aa506d Translated using Weblate (Chinese (Traditional))
Currently translated at 100.0% (91 of 91 strings)

Co-authored-by: Toot <toothpicker@users.noreply.translate.akkoma.dev>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-static-pages/zh_Hant/
Translation: Pleroma fe/Akkoma Backend (Static pages)
2024-04-26 17:49:40 +00:00
Weblate
64050b0fb5 Update translation files
Updated by "Squash Git commits" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-errors/
Translation: Pleroma fe/Akkoma Backend (Errors)
2024-04-26 17:49:40 +00:00
Weblate
7babc11475 Update translation files
Updated by "Squash Git commits" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-errors/
Translation: Pleroma fe/Akkoma Backend (Errors)
2024-04-26 17:49:40 +00:00
Floatingghost
828158ef49 Merge remote-tracking branch 'oneric/fedfix-public-ld' into develop 2024-04-26 18:49:31 +01:00
Floatingghost
c7276713e0 Merge remote-tracking branch 'oneric/changelog-3.13' into develop 2024-04-26 18:43:39 +01:00
floatingghost
310c1b7e24 Merge pull request 'Change nginx cache size to 1 GiB' (#759) from norm/akkoma:nginx-cache-size into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/759
2024-04-26 17:40:23 +00:00
floatingghost
7da6f41718 Merge pull request 'Exiftool: Strip all non-essential metadata tags' (#745) from Oneric/akkoma:exiftool-strip-all into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/745
2024-04-26 17:38:47 +00:00
floatingghost
53c67993bb Merge pull request 'Remove unused top level files' (#760) from norm/akkoma:remove-unused-files into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/760
2024-04-26 17:24:21 +00:00
Oneric
5bc64c5753 changelog: add note about StripMetadata and ReadDescription order 2024-04-26 18:57:28 +02:00
Oneric
5ee0fb18cb exiftool: make stripped tags configurable 2024-04-26 18:57:24 +02:00
Norm
771a306dc1 Update clients list in docs
- Warn that the apps here are not officially supported
- Update Kaiteki's social profile
- Remove Fedi App
- Add Subway Tooter
2024-04-26 04:17:17 -04:00
Norm
5b320616ca Remove unused top level files
I don't think anyone really uses the tools that use these files these
days, and they are another thing that needs to be updated every so
often.
2024-04-26 02:33:18 -04:00
Norm
72c2d9f009 Change nginx cache size to 1 GiB
The current 10 GiB cache size is too large to fit into tmpfs for VMs and
other machines with smaller RAM sizes. Most non-Debian distros mount
/tmp on tmpfs.
2024-04-26 01:43:44 -04:00
Oneric
12db5c23f2 Add missing changelog entries 2024-04-26 00:51:45 +02:00
Oneric
a95af3ee4c exiftool: strip all non-essential tags
Documentation was already clear on this only stripping GPS tags.
But there are more potentially sensitive metadata tags (e.g. author
and possibly description) and the name alone suggests a broader effect.

Thus change the filter to strip all metadata except for colourspace info
and orientation (technically it strips everything and then re-adds
selected tags).

Explicitly stripping CommonIFD0 is needed since -all does not modify
IFD0 due to TIFF storing some actual image data there. CommonIFD0 then
strips a bunch of commonly used actual metadata tags from IFD0, to my
understanding leaving TIFF image data and custom metadata tags intact.
2024-04-25 23:00:42 +02:00
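
A sketch of what the resulting exiftool invocation could look like, assembled from the description above (the argument list is an assumption, not copied from the filter):

```elixir
args = [
  "-ignoreMinorErrors",   # e.g. JXL uploads without a preexisting EXIF chunk
  "-overwrite_original",
  "-all=",                # strip everything...
  "-CommonIFD0=",         # ...including common metadata tags kept in IFD0
  "-tagsFromFile", "@",   # ...then re-add from the file itself:
  "-ColorSpaceTags",      # colourspace info
  "-Orientation"          # and orientation
]

# upload_path is a placeholder for the file being processed.
System.cmd("exiftool", args ++ [upload_path])
```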
Oneric
163cb1d5e0 exiftool: strip JXL and HEIC
As of exiftool 12.57 both formats are supported, but EXIF data is
optional for JXL and if exiftool doesn’t find a preexisting metadata
chunk it will create one and treat it as a minor error resulting in
a non-zero exit code.
Setting -ignoreMinorErrors avoids failing on such uploads.
2024-04-25 23:00:42 +02:00
Oneric
24e608ab5b docs: fix typo 2024-04-25 23:00:42 +02:00
Oneric
b0a46c1e2e Normalise public adressing to fix federation
Due to JSON-LD compaction the full address of public scope
may also occur in shorter forms and the spec requires us to treat them
all equivalently. To save us the pain of repeatedly checking for all
variants internally, normalise inbound data to just one form.
See note at: https://www.w3.org/TR/activitypub/#public-addressing

This needs to happen very early, even before the other addressing fixes,
else an earlier validator will reject the object. This in turn required
moving the list-type normalisation earlier as well, but since I was
unsure about putting empty lists into the data when no such field
existed before, I excluded this case and thus the later fixing had to be
kept as well.

Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/670
2024-04-25 18:45:16 +02:00
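
A sketch of the normalisation step (the variant list follows the AP spec's public-addressing note; module and function placement are assumptions):

```elixir
defmodule PublicAddressingSketch do
  @public "https://www.w3.org/ns/activitystreams#Public"

  # JSON-LD compaction may shorten the public collection IRI; the spec
  # requires all spellings to be treated as equivalent, so rewrite them
  # to the canonical form early.
  def normalize_public(addresses) when is_list(addresses) do
    Enum.map(addresses, fn
      addr when addr in ["as:Public", "Public"] -> @public
      addr -> addr
    end)
  end
end
```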
floatingghost
b1c6621e66 Merge pull request 'Read image description from EXIF data' (#744) from timorl/akkoma:elseinspe into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/744
2024-04-25 12:52:31 +00:00
floatingghost
764dbeded4 Merge pull request 'Accept all standard actor types' (#751) from Oneric/akkoma:all-actor-types into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/751
2024-04-24 17:09:02 +00:00
floatingghost
06847ca5f8 Merge pull request 'Update nginx config and install docs to use certbot's nginx plugin' (#752) from norm/akkoma:docs-nginx-certbot into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/752
2024-04-24 17:08:39 +00:00
floatingghost
80e1c094c7 Merge pull request 'Don't strip newlines in pre' (#709) from snan/akkoma:pre into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/709
2024-04-24 17:00:34 +00:00
floatingghost
4a0e90e8a8 Merge pull request 'ReceiverWorker: Make sure non-{:ok, _} is returned as {:error, …}' (#753) from Oneric/akkoma:receive-worker-return into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/753
2024-04-24 17:00:18 +00:00
floatingghost
1e48a37545 Merge pull request 'Remove unused AP C2S endpoints' (#749) from who-wants-to-yeet-c2s-i-want-to-yeet-c2s into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/749
2024-04-24 16:59:58 +00:00
Oneric
83f75c3e93 Accept all standard actor types 2024-04-23 18:14:34 +02:00
floatingghost
7d89dba528 Merge pull request 'Fix flaky expires_at tests' (#754) from Oneric/akkoma:test-flaky-expires_at into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/754
2024-04-23 15:14:21 +00:00
Floatingghost
92168fa5a1 Merge remote-tracking branch 'origin/develop' into who-wants-to-yeet-c2s-i-want-to-yeet-c2s 2024-04-23 14:37:05 +01:00
Floatingghost
3e199242b0 remove upload_media from AP representation 2024-04-23 14:35:52 +01:00
Norm
0fa3fbf55e Update OTP install docs to use certbot nginx plugin 2024-04-23 00:02:54 -04:00
Norm
e5f4282cca Update certbot instructions for Alpine Linux 2024-04-23 00:02:54 -04:00
Norm
cdde95ad8b Update gentoo install guide to use certbot-nginx 2024-04-23 00:02:54 -04:00
Norm
c493769364 Update Nginx setup docs for Fedora and Red Hat OTP 2024-04-23 00:02:15 -04:00
Norm
39b8e73532 Update docs for Arch Linux nginx setup
Alongside moving to certbot's nginx plugin, also use conf.d instead of
recreating the sites-{available,enabled} setup that Debian/Ubuntu uses.

Furthermore, also request a certificate for the media domain at the same
time since that's now required.
2024-04-21 18:19:07 -04:00
Norm
5405828ab1 Update debian install docs to use certbot nginx plugin 2024-04-21 18:19:07 -04:00
Norm
3e9643b172 Update nginx config for Certbot's nginx plugin 2024-04-21 18:19:01 -04:00
Oneric
20c22eb159 Fix flaky expires_at tests
The API parameter is not a timestamp but an offset.
If a sufficient amount of time passes between the test's
expires_at calculation and the internal calculation during processing
of the request, the strict equality assertion fails (either a direct
assertion, or indirectly via job lookup).

To avoid this, lower the comparison granularity.
2024-04-21 21:08:53 +00:00
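
One way to lower the granularity in ExUnit (variable names are assumptions): compare with a tolerance instead of strict equality:

```elixir
# Both timestamps are computed independently; allow them to differ
# by up to 30 seconds rather than asserting exact equality.
assert_in_delta DateTime.to_unix(expected_expires_at),
                DateTime.to_unix(actual_expires_at),
                30
```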
Haelwenn (lanodan) Monnier
0c2f200b4d ReceiverWorker: Make sure non-{:ok, _} is returned as {:error, …}
Otherwise an error like `{:signature, {:error, {:error, :not_found}}}`
ends up considered a success.

Cherry-picked-from: a299ddb10e
2024-04-21 20:58:06 +02:00
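
The shape of the fix described above, sketched (`process_incoming/1` is a stand-in for the worker's inner call):

```elixir
# Normalise the return value so Oban treats anything that isn't
# {:ok, _} as a failure instead of a silent success.
case process_incoming(params) do
  {:ok, _} = ok -> ok
  {:error, _} = err -> err
  other -> {:error, other}
end
```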
timorl
3f54945033 Fix the one test that wasn't just being flaky 2024-04-21 19:43:26 +02:00
timorl
09d3ccf770 Read description before stripping metadata 2024-04-19 20:51:54 +02:00
timorl
9da0fe930e Format, but this time with a non-ancient version of elixir 2024-04-19 18:07:50 +02:00
timorl
2a9db73b4c Merge branch 'develop' into elseinspe 2024-04-19 17:11:55 +02:00
floatingghost
0fee71f58f Merge pull request 'Handle failed fetches a bit better' (#743) from failed-fetch-processing into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/743
2024-04-19 11:25:14 +00:00
Floatingghost
370576474c only consider :op and :id args in duplicate checks 2024-04-19 11:39:27 +01:00
Floatingghost
1ed975636b Keep READ endpoints, purge WRITE 2024-04-19 11:06:01 +01:00
timorl
cd7af81896 Rename StripLocation to StripMetadata for temporal-proofing reasons 2024-04-16 20:37:00 +02:00
Floatingghost
2c7e5b2287 changelog entry 2024-04-16 13:57:05 +01:00
Floatingghost
ddb8a5ef73 yeet AP C2S support
literally nothing uses C2S AP, and it's another route into core
systems which requires analysis and maintenance. A second API
is just extra surface for potentially bad things so let's take
it out back and obliterate it
2024-04-16 13:55:03 +01:00
Floatingghost
123db1abc4 Merge branch 'develop' into failed-fetch-processing 2024-04-16 12:35:54 +01:00
Floatingghost
b2c29527fb make xmerl shut up about markup 2024-04-16 10:19:30 +01:00
timorl
59d32c10d9 Formatting 2024-04-16 08:02:13 +02:00
Floatingghost
d2cee15c15 mix format says no 2024-04-16 03:07:28 +01:00
Floatingghost
d70fa16383 oban options should be a keyword list 2024-04-16 02:58:50 +01:00
Floatingghost
5043571084 Enable oban job uniqueness
by default just prevent job floods with a 1-second
uniqueness check, but override in RemoteFetcherWorker
for a 5-minute uniqueness check over all states

:infinity is an option we can go for maybe at some point,
but that would prevent any refetches so maybe not idk.
2024-04-16 02:53:24 +01:00
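
A sketch using Oban's `unique` option with the values from this series (queue name is an assumption); the related commit 370576474c limiting duplicate checks to the :op and :id args corresponds to the `keys` setting:

```elixir
defmodule RemoteFetcherWorkerSketch do
  use Oban.Worker,
    queue: :remote_fetcher,
    # 5-minute uniqueness window across all job states,
    # keyed only on the :op and :id arguments.
    unique: [period: 300, states: Oban.Job.states(), keys: [:op, :id]]

  @impl Oban.Worker
  def perform(%Oban.Job{args: _args}), do: :ok
end
```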
Floatingghost
1896ff1ab0 changelog entry 2024-04-16 02:35:59 +01:00
Floatingghost
b7dd739de1 Make sure we return the right format for oban 2024-04-16 02:35:21 +01:00
timorl
b144218dce Merge branch 'develop' into elseinspe 2024-04-14 20:31:33 +02:00
Floatingghost
2fc25980d1 fix pattern matching in fetch errors 2024-04-13 23:55:26 +01:00
floatingghost
c1f0b6b875 Merge pull request 'Accept body parameters for /api/pleroma/notification_settings' (#738) from Oneric/akkoma:notif-setting-parameters into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/738
2024-04-13 22:55:02 +00:00
Floatingghost
18442dcc7e Fix quote test 2024-04-13 23:05:52 +01:00
Floatingghost
33fb74043d Bring our adjustments into line with atom-failure 2024-04-13 22:56:04 +01:00
Floatingghost
49ed27cd96 require logger 2024-04-13 22:25:31 +01:00
Floatingghost
7f6e35ece4 formatting 2024-04-12 20:33:33 +01:00
Mark Felder
2e369aef71 Allow the Remote Fetcher to attempt fetching an unreachable instance 2024-04-12 20:33:21 +01:00
Mark Felder
fed7a78c77 Oban jobs should be discarded on permanent errors 2024-04-12 20:33:17 +01:00
Mark Felder
c0532bcae0 Handle 401s as I have observed it in the wild 2024-04-12 20:33:11 +01:00
Mark Felder
f31b262aec Improve test descriptions 2024-04-12 20:32:38 +01:00
Mark Felder
ff515c05c3 Prevent requeuing Remote Fetcher jobs that exceed thread depth 2024-04-12 20:32:31 +01:00
Mark Felder
7e5004b3e2 Leverage existing atoms as return errors for the object fetcher 2024-04-12 20:32:13 +01:00
Mark Felder
53a9413b95 Formatting 2024-04-12 20:31:40 +01:00
Mark Felder
d69cba1b93 Remove duplicate log messages from Transmogrifier
Object fetch errors are logged in the fetcher module
2024-04-12 20:31:31 +01:00
Mark Felder
3c54f407c5 Conslidate log messages for object fetcher failures and leverage Logger.metadata 2024-04-12 20:30:38 +01:00
Mark Felder
825ae46bfa Set Logger level to error 2024-04-12 20:29:33 +01:00
Mark Felder
331710b6bb RemoteFetcherWorker Oban job tests 2024-04-12 20:29:28 +01:00
Mark Felder
eeed051a0f Fix detection of user follower collection being private
We were overzealous with matching on a raw error from the object fetch that should have never been relied on like this. If we can't fetch successfully we should assume that the collection is private.

Building a more expressive and universal error struct to match on may be something to consider.
2024-04-12 20:29:11 +01:00
Mark Felder
30d63aaa6e Revert "Mark instances as unreachable when returning a 403 from an object fetch"
This reverts commit d472bafec19cee269e7c943bafae7c805785acd7.
2024-04-12 20:28:56 +01:00
Mark Felder
e2b04fac5a Skip remote fetch jobs for unreachable instances 2024-04-12 20:28:36 +01:00
Mark Felder
6d368808d3 Remove mistaken duplicate fetch 2024-04-12 20:28:31 +01:00
Mark Felder
160d113b30 Changelogs 2024-04-12 20:28:26 +01:00
Mark Felder
132036f951 Cancel remote fetch jobs for deleted objects 2024-04-12 20:28:21 +01:00
Mark Felder
4ff22a409a Consolidate the HTTP status code checking into the private get_object/1 2024-04-12 20:28:16 +01:00
Mark Felder
4c29366fe5 Mark instances as unreachable when returning a 403 from an object fetch
This is a definite sign the instance is blocked and they are enforcing authorized_fetch
2024-04-12 20:27:33 +01:00
Mark Felder
ac4cc619ea Fix Transmogrifier tests
These tests relied on the removed Fetcher.fetch_object_from_id!/2 function injecting the error tuple into a log message with the exact words "Object containment failed."

We will keep this behavior by generating a similar log message, but perhaps this should do a better job of matching on the error tuple returned by Transmogrifier.handle_incoming/1
2024-04-12 20:26:56 +01:00
Mark Felder
c241b5b09f Remove Fetcher.fetch_object_from_id!/2
It was only being called once and can be replaced with a case statement.
2024-04-12 20:26:28 +01:00
Floatingghost
f8a53fbe2f bump dependencies 2024-04-12 19:59:30 +01:00
floatingghost
e36c0f96fc Merge pull request 'Add docker override file to docs and gitignore' (#621) from norm/akkoma:docker-compose-override into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/621
2024-04-12 18:50:25 +00:00
floatingghost
6f3c955aa0 Merge pull request 'elixir1.16 testing' (#742) from elixir1.16 into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/742
2024-04-12 18:49:33 +00:00
floatingghost
024ffadd80 Merge pull request 'Don't list old accounts as aliases in WebFinger' (#713) from erincandescent/akkoma:no-old-account-alias into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/713
2024-04-12 18:34:14 +00:00
floatingghost
e2e4f53585 Merge pull request 'Use standard-compliant Accept header when fetching' (#740) from Oneric/akkoma:fetch_std-accept-hdr into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/740
2024-04-12 18:28:26 +00:00
Floatingghost
d910e8d7d1 Add test suite for elixir1.16 2024-04-12 19:13:33 +01:00
Floatingghost
df25d86999 Cleaned up FEP-fffd commits a bit 2024-04-12 18:50:57 +01:00
floatingghost
4887df12d7 Merge pull request 'Allow for url to be a list' (#718) from helge/akkoma:develop into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/718
2024-04-12 17:39:38 +00:00
floatingghost
e6ca2b4d2a Merge pull request 'Fix array-less EmojiReacts' (#739) from Oneric/akkoma:tag-arrayless into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/739
2024-04-12 17:26:07 +00:00
floatingghost
6ba80aaff5 Merge pull request 'Check if data is visible before embedding it in OG tags' (#741) from ograph-restrictions into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/741
2024-04-12 17:22:59 +00:00
floatingghost
8e60177466 Merge pull request 'MRF.InlineQuotePolicy: Add link to post URL, not ID' (#733) from erincandescent/akkoma:quote-url into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/733
2024-04-12 17:02:52 +00:00
Erin Shepherd
75d9e2b375 MRF.InlineQuotePolicy: Add link to post URL, not ID
"id" is used for the canonical link to the AS2 representation of an object.
"url" is typically used for the canonical link to the HTTP representation.
It is what we use, for example, when following the "external source" link
in the frontend. However, it's not the link we include in the post contents
for quote posts.

Using URL instead means we include a more user-friendly URL for Mastodon,
and a working (in the browser) URL for Threads
2024-04-12 13:23:50 +02:00
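
The distinction sketched as code (illustrative helper):

```elixir
defmodule QuoteLinkSketch do
  # Prefer the HTTP representation ("url") for the inline quote link,
  # falling back to the AS2 id when no url is present.
  def quote_link(%{"url" => url}) when is_binary(url), do: url
  def quote_link(%{"id" => id}), do: id
end
```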
Floatingghost
05f8179d08 check if data is visible before embedding it in OG tags
previously we would uncritically take data and format it into
tags for static-fe and the like - however, instances can be
configured to disallow unauthenticated access to these resources.

this means that OG tags can act as a vector for information leakage.

_technically_ this should only occur if you have both
restrict_unauthenticated *AND* you run static-fe, which makes no
sense since static-fe is for unauthenticated people in particular,
but hey ho.
2024-04-12 05:16:47 +01:00
Oneric
fae0a14ee8 Use standard-compliant Accept header when fetching
Spec says clients MUST use this header and servers MUST respond to it,
while servers merely SHOULD respond to the one we used before.
https://www.w3.org/TR/activitypub/#retrieving-objects

The old value is kept as a fallback since, at least as of two years ago,
not every implementation correctly dealt with the spec-compliant
variant; see: https://github.com/owncast/owncast/issues/1827

Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/730
2024-04-12 00:22:37 +02:00
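
The header values in question (the spec-compliant value is quoted from the AP spec, with the old one appended as a fallback; the variable is illustrative):

```elixir
accept =
  ~s(application/ld+json; profile="https://www.w3.org/ns/activitystreams") <>
    ", application/activity+json"

headers = [{"accept", accept}]
```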
Floatingghost
1135935cbe Merge remote-tracking branch 'oneric/ipv6' into develop 2024-04-11 20:59:49 +01:00
floatingghost
090a77d1af Merge pull request 'static-fe: don’t squeeze non-square images' (#705) from Oneric/akkoma:staticfe-nonsquare-img into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/705
2024-04-11 18:43:03 +00:00
floatingghost
0e066bddae Merge pull request 'Drop base_url special casing in test env' (#737) from Oneric/akkoma:testenv_drop_baseurl_specialcase into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/737
2024-04-11 18:24:09 +00:00
Oneric
bd74ad9ce4 Accept body parameters for /api/pleroma/notification_settings
This brings it in line with its documentation and akkoma-fe’s
expectations. For backwards compatibility URL parameters are still
accepted with lower priority. Unfortunately this means duplicating
parameters and descriptions in the API spec.

Usually Plug already pre-merges parameters from different sources into
the plain 'params' parameter which then gets forwarded by Phoenix.
However, OpenApiSpex 3.x prevents this; 4.x is set to change this
  https://github.com/open-api-spex/open_api_spex/issues/334
  https://github.com/open-api-spex/open_api_spex/issues/92

Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/691
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/722
2024-04-09 04:11:28 +02:00
Oneric
462225880a Accept EmojiReacts with non-array tag
JSON-LD compaction strips the array since it’s just one object

Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/720
2024-04-09 04:04:16 +02:00
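
A sketch of the fix described (function name is illustrative): re-wrap a bare object so compacted and non-compacted forms validate identically:

```elixir
defmodule TagFixSketch do
  # JSON-LD compaction strips the single-element array around "tag";
  # normalise it back to a list.
  def fix_tag(%{"tag" => tag} = data) when is_map(tag), do: Map.put(data, "tag", [tag])
  def fix_tag(data), do: data
end
```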
Oneric
debd686418 Add tests for our own custom emoji format 2024-04-09 03:52:22 +02:00
Oneric
9598137d32 Drop base_url special casing in test env
61621ebdbc already explicitly added
the uploader base url to config/test.exs and it reduces differences
from prod.
2024-04-07 00:20:12 +02:00
floatingghost
b8393ad9ed Merge pull request 'context: add featured definition' (#717) from erincandescent/akkoma:context-featured into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/717
2024-04-03 10:22:09 +00:00
floatingghost
554f19a9ed Merge pull request 'Refresh Users much more aggressively when processing Move activities' (#714) from erincandescent/akkoma:move-bust-cache into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/714
2024-04-03 10:03:14 +00:00
FloatingGhost
9c53a3390e Ensure we have the emoji base path 2024-04-02 14:12:03 +01:00
FloatingGhost
795524daf1 bump version 2024-04-02 11:36:47 +01:00
FloatingGhost
b5d97e7d85 Don't error out if we're not using the local uploader 2024-04-02 11:36:26 +01:00
FloatingGhost
f592090206 Fix tests that relied on no base_url in the uploader 2024-04-02 11:23:57 +01:00
FloatingGhost
61621ebdbc Add tests for extra warnings about media subdomains 2024-04-02 10:54:53 +01:00
FloatingGhost
4cd299bd83 Add extra warnings if the uploader is on the same domain as the main application 2024-04-02 10:20:59 +01:00
Erin Shepherd
8fbd771d6e context: add featured & backgroundUrl definitions
These were missing from our context, which caused interoperability issues with
people who do context processing
2024-04-01 13:39:38 +02:00
Erin Shepherd
464db9ea0b Don't list old accounts as aliases in WebFinger
Per the XRD specification:

> 2.4. Element <Alias>
>
> The <Alias> element contains a URI value that is an additional
> identifier for the resource described by the XRD. This value
> MUST be an absolute URI. The <Alias> element does not identify
> additional resources the XRD is describing, **but rather provides
> additional identifiers for the same resource.**

(http://docs.oasis-open.org/xri/xrd/v1.0/os/xrd-1.0-os.html#element.alias, emphasis mine)

In other words, the alias list is expected to link to things which are
not just semantically the same, but exactly the same. Old user accounts
don't do that

This change should not pose a compatibility issue: Mastodon does not
list old accounts here (See e1fcb02867/app/serializers/webfinger_serializer.rb (L12))

The use of as:alsoKnownAs is also not quite semantically right here
(see https://www.w3.org/TR/did-core/#dfn-alsoknownas, which defines
it to be used to refer to identifiers which are interchangable) but
that's what DID get for reusing a property definition that Mastodon
already squatted long before they got to it
2024-04-01 13:34:58 +02:00
FloatingGhost
2d439034ca Ensure that spoof-inserted does not time out 2024-03-30 12:55:22 +00:00
FloatingGhost
087d88f787 bump version 2024-03-30 11:45:07 +00:00
FloatingGhost
3650bb0370 Changelog entry 2024-03-30 11:44:34 +00:00
Oneric
ee7d98b093 Update Changelog 2024-03-29 08:35:15 -01:00
Oneric
0648d9ebaa Add mix tasks to detect spoofed posts and users
At least as far as we can
2024-03-26 16:05:20 -01:00
Oneric
d441101200 Add mix task to detect uploaded spoof payloads 2024-03-26 16:05:20 -01:00
Oneric
31f90bbb52 Register APNG MIME type
The newest git HEAD of MIME already knows about APNG, but this
hasn’t been released yet. Without this, APNG attachments from
remote posts won’t display as images in frontends.

Fixes: akkoma#657
2024-03-26 15:44:44 -01:00
Oneric
61ec592d66 Drop obsolete pixelfed workaround
This pixelfed issue was fixed in 2022-12 in
https://github.com/pixelfed/pixelfed/pull/3932

Co-authored-by: FloatingGhost <hannah@coffee-and-dreams.uk>
2024-03-26 15:11:06 -01:00
Oneric
8684964c5d Only allow exact id matches
This protects us from falling for obvious spoofs as from the current
upload exploit (unfortunately we can’t reasonably do anything about
spoofs with exact matches as was possible via emoji and proxy).

Such objects being invalid is supported by the spec, specifically
sections 3.1 and 3.2: https://www.w3.org/TR/activitypub/#obj-id

Anonymous objects are not relevant here (they can only exist within
parent objects iiuc) and neither are client-to-server or transient objects
(as those cannot be fetched in the first place).
This leaves us with the requirement for `id` to (a) exist and
(b) be a publicly dereferencable URI from the originating server.
This alone does not yet demand strict equivalence, but the spec then
further explains objects ought to be fetchable _via their ID_.
Meaning an object not retrievable via its ID is invalid.

This reading is supported by the fact that e.g. GoToSocial (recently) and
Mastodon (for 6+ years) do already implement such strict ID checks,
additionally proving this doesn’t cause federation issues in practice.

However, apart from canonical IDs there can also be additional display
URLs. *omas first redirect those to their canonical location, but *keys
and Mastodon directly serve the AP representation without redirects.

Mastodon and GTS deal with this in two different ways,
but both constitute an effective countermeasure:
 - Mastodon:
   Unless it already is a known AP id, two fetches occur.
   The first fetch just reads the `id` property and then refetches from
   the id. The last fetch requires the returned id to exactly match the
   URL the content was fetched from. (This can be optimised by skipping
   the second fetch if it already matches)
   05eda8d193/app/helpers/jsonld_helper.rb (L168)
   63f0979799

 - GTS:
   Only does a single fetch and then checks if _either_ the id
   _or_ url property (which can be an object) match the original fetch
   URL. This relies on implementations always including their display URL
   as "url" if differing from the id. For actors this is true for all
   investigated implementations, for posts only Mastodon includes an
   "url", but it is also the only one with a differing display URL.
   2bafd7daf5 (diff-943bbb02c8ac74ac5dc5d20807e561dcdfaebdc3b62b10730f643a20ac23c24fR222)

Albeit Mastodon’s refetch offers higher compatibility with theoretical
implementations using either multiple different display URLs or not
denoting any of them as "url" at all, for now we chose to adopt a
GTS-like refetch-free approach to avoid additional implementation
concerns wrt whether redirects should be allowed when fetching a
canonical AP id and potential for accidentally loosening some checks
(e.g. cross-domain refetches) for one of the fetches.
This may be reconsidered in the future.
2024-03-25 14:05:05 -01:00
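
A simplified sketch of the GTS-style check described above (helper names invented; a real implementation has to handle more "url" shapes and error cases):

    # Accept the fetched object only if its canonical id, or one of its
    # display "url"s, exactly matches the URL we fetched it from.
    defp valid_id?(%{"id" => id} = data, fetch_url) do
      id == fetch_url or fetch_url in display_urls(data)
    end

    defp display_urls(%{"url" => url}) when is_binary(url), do: [url]
    defp display_urls(%{"url" => %{"href" => href}}), do: [href]
    defp display_urls(%{"url" => urls}) when is_list(urls),
      do: Enum.flat_map(urls, &display_urls(%{"url" => &1}))
    defp display_urls(_), do: []
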
Oneric
48b3a35793 Update user reference after fetch
Since we always followed redirects (and until recently allowed fuzzy id
matches), the ap_id of the received object might differ from the initial
fetch url. This led to us mistakenly trying to insert a new user with
the same nickname, ap_id, etc as an existing user (which will fail due
to uniqueness constraints) instead of updating the existing one.
2024-03-25 14:05:05 -01:00
Oneric
9061d148be Ensure object id doesn’t change on refetch 2024-03-25 14:05:05 -01:00
Oneric
3e134b07fa fetcher: return final URL after redirects from get_object
Since we reject cross-domain redirects, this doesn’t yet
make a difference, but it’s required for stricter checking
subsequent commits will introduce.

To make sure (and in case we ever decide to reallow
cross-domain redirects) also use the final location
for containment and reachability checks.
2024-03-25 14:05:05 -01:00
Oneric
f07eb4cb55 Sanity check fetched user data
In order to properly process incoming notes we need
to be able to map the key id back to an actor.
Also, check collections actually belong to the same server.

Key ids of Hubzilla and Bridgy samples were updated to what
modern versions of those output. If anything still uses the
old format, we would not be able to verify their posts anyway.
2024-03-25 14:05:05 -01:00
Oneric
59a142e0b0 Never fetch resource from ourselves
If it’s not already in the database,
it must be counterfeit (or just not exist at all)

Changed test URLs were only ever used from "local: false" users anyway.
2024-03-25 14:05:05 -01:00
Oneric
fee57eb376 Move actor check into fetch_and_contain_remote_object_from_id
This brings it in line with its name and closes an,
in practice harmless, verification hole.

This was/is the only user of contain_origin making it
safe to change the behaviour on actor-less objects.

Until now refetched objects did not ensure the new actor matches the
domain of the object. We refetch polls occasionally to retrieve
up-to-date vote counts. A malicious AP server could have switched out
the poll after initial posting with a completely different post
attributed to an actor from another server.
While we indeed fell for this spoof before the commit,
it fortunately seems to have had no ill effect in practice,
since the associated Create activity is not changed. When exposing the
actor via our REST API, we read this info from the activity not the
object.

At first thought this still keeps one avenue for exploit open though:
the updated actor can be from our own domain and a third server be
instructed to fetch the object from us. However this is foiled by an
id mismatch. By necessity of being fetchable and our longstanding
same-domain check, the id must still be from the attacker’s server.
Even the most barebones authenticity check is able to suss this out.
2024-03-25 14:05:05 -01:00
Oneric
c4cf4d7f0b Reject cross-domain redirects when fetching AP objects
Such redirects on AP queries seem most likely to be a spoofing attempt.
If the object is legit, the id should match the final domain anyway and
users can directly use the canonical URL.

The lack of such a check (and use of the initially queried domain’s
authority instead of the final domain) was enabling the current exploit
to even affect instances which already migrated away from a same-domain
upload/proxy setup in the past, but retained a redirect to not break old
attachments.

(In theory this redirect could, with some effort, have been limited to
 only old files, but common guides employed a catch-all redirect, which
 allows even future uploads to be reachable via an initial query to the
 main domain)

Same-domain redirects are valid and also used by ourselves,
e.g. for redirecting /notice/XXX to /objects/YYY.
2024-03-25 14:05:05 -01:00
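
The core of such a check is a plain host comparison between the initially queried URL and the final location, along these lines (sketch; the real code also has to thread this through the HTTP client):

    # Same-domain redirects stay allowed; anything crossing hosts is refused.
    defp safe_redirect?(initial_url, final_url) do
      URI.parse(initial_url).host == URI.parse(final_url).host
    end
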
Oneric
baaeffdebc Update spoofed activity test
Turns out we already had a test for activities spoofed via upload due
to an exploit several years ago. Back then *oma did not verify content-type
at all and doing so was the only adopted countermeasure.
Even the added test sample though suffered from a mismatching id, yet
nobody seems to have thought it a good idea to tighten id checks, huh

Since we will add stricter id checks later, make id and URL match
and also add a testcase for no content type at all. The new section
will be expanded in subsequent commits.
2024-03-25 14:05:05 -01:00
Oneric
2bcf633dc2 Document Pleroma.Object.Fetcher 2024-03-25 14:05:05 -01:00
Oneric
93ab6a018e mix: fix docs task 2024-03-18 22:40:43 -01:00
Oneric
c806adbfdb Refactor Fetcher.get_object for readability
Apart from slightly different error reasons wrt content-type,
this does not change functionality in any way.
2024-03-18 22:40:43 -01:00
Oneric
ddd79ff22d Proactively harden emoji pack against path traversal
No new path traversal attacks are known. But given the many entrypoints
and code flow complexity inside pack.ex, it unfortunately seems
possible a future refactor or addition might reintroduce one.
Furthermore, some old packs might still contain traversing path entries
which could trigger undesirable actions on rename or delete.

To ensure this can never happen, assert safety during path construction.

Path.safe_relative was introduced in Elixir 1.14, but
fortunately, we already require at least 1.14 anyway.
2024-03-18 22:33:10 -01:00
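
A sketch of asserting safety at construction time with Path.safe_relative/1 (available since Elixir 1.14; the helper name here is invented):

    # Rejects entries that would escape the pack directory via "..".
    defp safe_join(base_dir, entry) do
      case Path.safe_relative(entry) do
        {:ok, rel} -> {:ok, Path.join(base_dir, rel)}
        :error -> {:error, :path_traversal}
      end
    end
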
Oneric
d6d838cbe8 StealEmoji: check remote size before downloading
To save on bandwidth and avoid OOMs with large files.
Ofc, this relies on the remote server
 (a) sending a content-length header and
 (b) being honest about the size.

Common fedi servers seem to provide the header and (b) at least raises
the required privilege of a malicious actor to a server infrastructure
admin of an explicitly allowed host.

A more complete defense which still works when faced with
a malicious server requires changes in upstream Finch;
see https://github.com/sneako/finch/issues/224
2024-03-18 22:33:10 -01:00
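
The pre-download check boils down to reading Content-Length from the response headers, roughly like this (sketch; header casing and malformed values need care in practice):

    # Returns false when the header is missing; the size is then unknown
    # and, unless explicitly allowed, the download should be skipped.
    defp size_ok?(headers, max_size) do
      case List.keyfind(headers, "content-length", 0) do
        {_, len} -> String.to_integer(len) <= max_size
        nil -> false
      end
    end
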
Oneric
6d003e1acd test/steal_emoji: consolidate configuration setup 2024-03-18 22:33:10 -01:00
Oneric
d1ce5fd911 test/steal_emoji: reduce code duplication with mock macro 2024-03-18 22:33:10 -01:00
Oneric
a4fa2ec9af StealEmoji: make final paths infeasible to predict
Certain attacks rely on predictable paths for their payloads.
If we weren’t so overly lax in our (id, URL) check, the current
counterfeit activity exploit would be one of those.
It seems plausible for future attacks to hinge on,
or be made easier by, predictable paths too.

In general, letting remote actors place arbitrary data at
a path within our domain of their choosing (sans prefix)
just doesn’t seem like a good idea.

Using fully random filenames would have worked as well, but this
is less friendly for admins checking emoji dirs.
The generated suffix should still be more than enough;
an attacker needs on average 140 trillion attempts to
correctly guess the final path.
2024-03-18 22:33:10 -01:00
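
Six random bytes give 2^48 possible suffixes, i.e. about 140 trillion guesses on average, while keeping the shortcode readable in the filename. A sketch (names hypothetical):

    # Shortcode prefix for admin friendliness, random suffix so the final
    # path cannot be predicted by an attacker.
    defp emoji_filename(shortcode, ext) do
      suffix = Base.encode32(:crypto.strong_rand_bytes(6), case: :lower, padding: false)
      "#{shortcode}-#{suffix}.#{ext}"
    end
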
Oneric
ee5ce87825 test: use pack functions to check for emoji
The hardcoded path and filename assumptions
will be broken with the next commit.
2024-03-18 22:33:10 -01:00
Oneric
d1c4d07404 Convert StealEmoji to pack.json
This will decouple filenames from shortcodes and
allow more image formats to work instead of only
those included in the auto-load glob. (Albeit we
still saved other formats to disk, wasting space)

Furthermore, this will allow us to make
final URL paths infeasible to predict.
2024-03-18 22:33:10 -01:00
Oneric
fa98b44acf Fill out path for newly created packs
Before this was only filled on loading the pack again,
preventing the created pack from being used directly.
2024-03-18 22:33:10 -01:00
Oneric
5b126567bb StealEmoji: drop superfluous basename
Since 3 commits ago we restrict shortcodes to a subset of
the POSIX Portable Filename Character Set, therefore
this can never have a directory component.
2024-03-18 22:33:10 -01:00
Oneric
a8c6c780b4 StealEmoji: use Content-Type and reject non-images
E.g. *key’s emoji URLs typically don’t have file extensions, but
until now we just slapped ".png" at their ends hoping for the best.

Furthermore, this gives us a chance to actually reject non-images,
which before was not feasible exactly due to those extension-less URLs
2024-03-18 22:33:10 -01:00
Oneric
111cdb0d86 Split steal_emoji function for better readability 2024-03-18 22:33:10 -01:00
Norm
af041db6dc Limit emoji stealer to alphanum, dash, or underscore characters
As suggested in b387f4a1c1, only steal
emoji with alphanumeric, dash, or underscore characters.

Also consolidate all validation logic into a single function.

===

Taken from akkoma#703 with cosmetic tweaks

This matches our existing validation logic from Pleroma.Emoji and,
apart from excluding the dot, also POSIX’s Portable Filename
Character Set, making it always safe for use in filenames.

Mastodon is even stricter, also disallowing U+002D HYPHEN-MINUS
and requiring at least two characters.

Given both we and Mastodon reject shortcodes excluded
by this anyway, this doesn’t seem like a loss.
2024-03-18 22:33:10 -01:00
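
The consolidated check can be as small as a single anchored regex (sketch):

    # Alphanumerics, dash and underscore only: a subset of POSIX's
    # Portable Filename Character Set, minus the dot.
    def valid_shortcode?(shortcode), do: shortcode =~ ~r/\A[a-zA-Z0-9_-]+\z/
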
Oneric
fb54c47f0b Update example nginx config
To account for our subdomain recommendations
2024-03-18 22:33:10 -01:00
Oneric
fc36b04016 Drop media proxy same-domain default for base_url
Even more than with user uploads, a same-domain proxy setup bears
significant security risks due to serving untrusted content under
the main domain space.

A risky setup like that should never be the default.
2024-03-18 22:33:10 -01:00
Oneric
11ae8344eb Sanitise Content-Type of media proxy URLs
Just as with uploads and emoji before, this can otherwise be used
to place counterfeit AP objects or other malicious payloads.
In this case, even if we never assign a privileged type to content,
the remote server can and until now we just mimicked whatever it told us.

Preview URLs already handle only specific, safe content types
and redirect to the external host for all else; thus no additional
sanitisation is needed for them.

Non-previews are all delegated to the modified ReverseProxy module.
It already has consolidated logic for building response headers
making it easy to slip in sanitisation.

Although proxy urls are prefixed by a MAC built from a server secret,
attackers can still achieve a perfect id match when they are able to
change the contents of the pointed-to URL. After sending a post
containing an attachment at a controlled destination, the proxy URL can
be read back and inserted into the payload. After injection of
counterfeits in the target server the content can again be changed
to something innocuous, lessening the chance of detection.
2024-03-18 22:33:10 -01:00
Oneric
bcc528b2e2 Never automatically assign privileged content types
By mapping all extensions related to our custom privileged types
back to innocuous text/plain, our custom types will never automatically
be inserted, which was one of the factors making impersonation possible.

Note, this does not invalidate the upload and emoji Content-Type
restrictions from previous commits. Apart from counterfeit AP objects
there are other payloads with standard types this protects against,
e.g. *.js Javascript payloads as used in prior frontend injections.
2024-03-18 22:33:10 -01:00
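
Expressed as a sanitisation step (hypothetical helper; the actual commit works via the extension-to-type mapping described above):

    @privileged_types ["application/activity+json", "application/ld+json"]

    # Never let automatic type detection hand out an ActivityPub media type.
    defp deprivilege_type(type) when type in @privileged_types, do: "text/plain"
    defp deprivilege_type(type), do: type
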
Oneric
e88d0a2853 Fix Content-Type of our schema
Strict servers fail to process anything from us otherwise.

Fixes: akkoma#716
2024-03-18 22:33:10 -01:00
Oneric
ba558c0c24 Limit instance emoji to image types
Else malicious emoji packs or our EmojiStealer MRF can
put payloads into the same domain as the instance itself.
Sanitising the content type should prevent proper clients
from acting on any potential payload.

Note, this does not affect the default emoji shipped with Akkoma
as they are handled by another plug. However, those are fully trusted
and thus not in need of sanitisation.
2024-03-18 22:33:10 -01:00
Oneric
0ec62acb9d Always insert Dedupe upload filter
This actually was already intended before to eradicate all future
path-traversal-style exploits and to fix issues with some
characters like akkoma#610 in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
in Content-Disposition headers and GET parameters (with link_name),
_not_ the upload path.

Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we actually can be sure no path shenanigans occur, uploads
reliably work and we save some disk space.

While this makes the final path predictable, this prediction is
not exploitable. Insertion of a back-reference to the upload
itself requires pulling off a successful preimage attack against
SHA-256, which is deemed infeasible for the foreseeable future.

Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this will get overridden by whatever the
config generated by the "pleroma.instance gen" task chose.

Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.
2024-03-18 22:33:10 -01:00
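
Dedupe derives the filename from a digest of the file contents, roughly like this (sketch):

    # Same content always maps to the same name, so duplicates collapse into
    # one file; forging a colliding path requires a SHA-256 preimage attack.
    defp dedupe_filename(data, ext) do
      hash = Base.encode16(:crypto.hash(:sha256, data), case: :lower)
      hash <> "." <> ext
    end
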
Oneric
fef773ca35 Drop media base_url default and recommend different domain
Same-domain setups enabled now at least two exploits,
so they ought to be discouraged and definitely not be the default.
2024-03-18 22:33:10 -01:00
Oneric
bdefbb8fd9 plug/upload_media: query config only once on init 2024-03-18 22:33:10 -01:00
Oneric
f7c9793542 Sanitise Content-Type of uploads
The lack thereof enables spoofing ActivityPub objects.

A malicious user could upload fake activities as attachments
and (if having access to remote search) trick local and remote
fedi instances into fetching and processing it as a valid object.

If uploads are hosted on the same domain as the instance itself,
it is possible for anyone with upload access to impersonate(!)
other users of the same instance.
If uploads are exclusively hosted on a different domain, even the most
basic check of domain of the object id and fetch url matching should
prevent impersonation. However, it may still be possible to trick
servers into accepting bogus users on the upload (sub)domain and bogus
notes attributed to such users.
Instances which later migrated to a different domain and have a
permissive redirect rule in place can still be vulnerable.
If — like Akkoma — the fetching server is overly permissive with
redirects, impersonation still works.

This was possible because Plug.Static also uses our custom
MIME type mappings used for actually authentic AP objects.

Provided external storage providers don’t somehow return ActivityStream
Content-Types on their own, instances using those are also safe against
their users being spoofed via uploads.

Akkoma instances using the OnlyMedia upload filter
cannot be exploited as a vector in this way — IF the
fetching server validates the Content-Type of
fetched objects (Akkoma itself does this already).

However, restricting uploads to only multimedia files may be a bit too
heavy-handed. Instead this commit will restrict the returned
Content-Type headers for user uploaded files to a safe subset, falling
back to generic 'application/octet-stream' for anything else.
This will also protect against non-AP payloads as e.g. used in
past frontend code injection attacks.

It’s a slight regression in user comfort, if say PDFs are uploaded,
but this trade-off seems fairly acceptable.

(Note, just excluding our own custom types would offer no protection
 against non-AP payloads and bear a (perhaps small) risk of a silent
 regression should MIME ever decide to add a canonical extension for
 ActivityPub objects)

Now, one might expect there to be other defence mechanisms
besides Content-Type preventing counterfeits from being accepted,
like e.g. validation of the queried URL and AP ID matching.
Inserting a self-reference into our uploads is hard, but unfortunately
*oma does not verify the id in such a way and happily accepts _anything_
from the same domain (without even considering redirects).
E.g. Sharkey (and possibly other *keys) seem to attempt to guard
against this by immediately refetching the object from its ID, but
this is easily circumvented by just uploading two payloads with the
ID of one linking to the other.

Unfortunately *oma is thus _both_ a vector for spoofing and
vulnerable to those spoof payloads, resulting in an easy way
to impersonate our users.

Similar flaws exists for emoji and media proxy.

Subsequent commits will fix this by rigorously sanitising
content types in more areas, hardening our checks, improving
the default config and discouraging insecure config options.
2024-03-18 22:33:10 -01:00
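
The shape of the fallback logic, with an illustrative (not the actual) allowlist:

    @safe_upload_types ~w(image/png image/jpeg image/gif image/webp video/mp4 audio/mpeg)

    # Anything outside the safe subset degrades to a generic byte stream,
    # which no fetching server will parse as an ActivityPub object.
    defp sanitize_upload_type(type) when type in @safe_upload_types, do: type
    defp sanitize_upload_type(_), do: "application/octet-stream"
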
Sandra Snan
6116f81546
Don't strip newlines in the Atom feed 2024-03-11 12:50:14 +01:00
Oneric
7ef93c0b6d Add set_content_type to Plug.StaticNoCT 2024-03-04 17:50:20 +01:00
Oneric
dbb6091d01 Import copy of Plug.Static from Plug 1.15.3
The following commit will apply the needed patch
2024-03-04 17:50:20 +01:00
Oneric
5d467af6c5 Update notes on security exploit handling 2024-03-04 17:50:19 +01:00
Helge
5d89e0c917 Allow for url to be a list
This solves interoperability issues, see:
- https://git.pleroma.social/pleroma/pleroma/-/issues/3253
- https://socialhub.activitypub.rocks/t/fep-fffd-proxy-objects/3172/30?u=helge
- https://data.funfedi.dev/0.1.1/#url-parameter
2024-03-03 09:11:45 +01:00
Erin Shepherd
f18e2ba42c Refresh Users much more aggressively when processing Move activities
The default refresh interval of 1 day is woefully inadequate here;
users expect to be able to add the alias to their new account and
press the move button on their old account and have it work.

This allows callers to specify a maximum age before a refetch is
triggered. We set that to 5s for the move code, as a nice compromise
between Making Things Work and ensuring that this can't be used
to hammer a remote server
2024-02-29 21:14:53 +01:00
Oneric
fc95519dbf Allow fetching over IPv6
Mint/Finch disable IPv6 by default preventing us from
fetching anything from IPv6-only hosts without this.
2024-02-25 23:50:51 +01:00
FloatingGhost
889b57df82 2024.02 release 2024-02-24 13:54:21 +00:00
Weblate
34ffb92db4 Update translation files
Updated by "Squash Git commits" hook in Weblate.

Co-authored-by: Weblate <noreply@weblate.org>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-posix-errors/
Translation: Pleroma fe/Akkoma Backend (Posix Errors)
2024-02-24 13:42:59 +00:00
Weblate
c6dceb1802 Translated using Weblate (Polish)
Currently translated at 100.0% (47 of 47 strings)

Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: subtype <subtype@hollow.capital>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-posix-errors/pl/
Translation: Pleroma fe/Akkoma Backend (Posix Errors)
2024-02-24 13:42:59 +00:00
Weblate
caaf2deb22 Translated using Weblate (Polish)
Currently translated at 18.1% (183 of 1006 strings)

Translated using Weblate (Polish)

Currently translated at 6.6% (67 of 1006 strings)

Co-authored-by: Weblate <noreply@weblate.org>
Co-authored-by: subtype <subtype@hollow.capital>
Translate-URL: http://translate.akkoma.dev/projects/akkoma/akkoma-backend-config-descriptions/pl/
Translation: Pleroma fe/Akkoma Backend (Config Descriptions)
2024-02-24 13:42:59 +00:00
floatingghost
7d61fb0906 Merge pull request 'Fix static-fe Twitter metadata / URL previews' (#700) from Oneric/akkoma:staticfe-metadata into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/700
2024-02-24 13:42:55 +00:00
floatingghost
cdf73e0ac8 Merge pull request 'Better document database differences for Pleroma migrations' (#699) from Oneric/akkoma:doc_pleroma-migration-db into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/699
2024-02-24 04:33:43 +00:00
floatingghost
967e6b8ade Merge pull request 'Docs: Add description for mrf_reject_newly_created_account_notes' (#695) from YokaiRick/akkoma:doc_mrf_reject_acc_notes into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/695
2024-02-24 04:31:28 +00:00
Oneric
d7c8e9df27 static-fe: don’t squeeze non-square avatars
This will crop them to a square, matching the behaviour of Husky and *key
and allowing us to never worry about consistent alignment.
Note, akkoma-fe instead displays the full image with inserted spacing.
2024-02-23 23:39:44 +00:00
Oneric
a0daec6ea1 static-fe: don’t squeeze non-square emoji
Emoji and the navbar items want to blend in with lines of text,
so fix their height and let the width adjust as needed.
2024-02-23 23:39:44 +00:00
Oneric
bff2812a93 More prominently document db migrations in migrations from Pleroma
By now most instances will run a version past 2022-08 but the guide
only documented it for from-source installs and Pleroma develop.
2024-02-23 23:54:14 +01:00
Oneric
7964272c98 Document how to avoid data loss on migration from Pleroma 2024-02-23 23:54:09 +01:00
Oneric
c08f49d88e Add tests for static-fe metadata tags 2024-02-21 00:33:32 +00:00
FloatingGhost
3111181d3c mix format 2024-02-20 15:09:04 +00:00
floatingghost
2f9aad0e65 Merge pull request '[Security] StealEmojiPolicy: Sanitize shortcodes' (#701) from erincandescent/akkoma:stealemojipolicy-sanitize into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/701
2024-02-20 15:08:54 +00:00
Erin Shepherd
b387f4a1c1 Don't steal emoji whose shortcodes have dots or colons in their name
Mastodon at the very least seems to prevent the creation of emoji with
dots in their name (and refuses to accept them in federation). It feels
like being cautious in what we accept is reasonable here.

Colons are the emoji separator and so obviously should be blocked.

Perhaps instead of filtering out things like this we should just
do a regex match on `[a-zA-Z0-9_-]`? But that's plausibly a decision
for another day

    Perhaps we should also have a centralised "is this a valid emoji shortcode?"
    function
2024-02-20 11:33:55 +01:00
Haelwenn (lanodan) Monnier
7d94476dd6 StealEmojiPolicy: Sanitize shortcodes
Closes: https://git.pleroma.social/pleroma/pleroma/-/issues/3245
2024-02-20 11:19:00 +01:00
rick
c25cfe9b7a fixed spelling 2024-02-19 23:25:20 +01:00
Oneric
41dd37d796 doc/cheatsheet: add missing MRFs
Or mentions of MRFs in the main list
whose options were already documented.
2024-02-19 23:15:47 +01:00
Oneric
9830d54fa1 doc/cheatsheet: sort main MRF list alphabetically
It is too cumbersome to find a specific policy atm
or to check if all are documented yet.
Trivial placeholder policies are excluded from this.
2024-02-19 23:15:30 +01:00
Oneric
f254e4f530 doc/cheatsheet: add missing MRF config detail docs
And remove “on by default” text from individual entries.
They are now already in the “on by default” section.
2024-02-19 23:14:44 +01:00
Oneric
da4190c46e doc/cheatsheet: split out always active MRFs
It doesn’t make sense to add/remove them from the policies list
2024-02-19 23:14:24 +01:00
Oneric
7a2d68c3ab doc/cheatsheet: add link to ActivityExpiration config details 2024-02-19 23:14:07 +01:00
Oneric
8e7a89605d doc/cheatsheet: move MRF policies key to end of section
This makes it easier to spot the transparency options
2024-02-19 23:13:48 +01:00
Oneric
1640d19448 doc/cheatsheet: move :activitypub section ahead
Else it is too easy to mistake for another MRF policy.
2024-02-19 23:13:25 +01:00
Oneric
8f1776a8a7 Purge leftovers from FollowBot MRF
It was dropped in 9db4c2429f
2024-02-19 23:13:05 +01:00
Oneric
1ec6e193e6 doc: clarify RejectNewlyCreated uses local account discovery 2024-02-19 23:12:41 +01:00
Oneric
37e2a35b86 Fix Twitter metadata
This partly reverts 1d884fd914
while fixing both the issue it addressed and the issue it caused.

The above commit successfully fixed OpenGraph metadata tags
which until then always showed the user bio instead of post content
by handing the activity’s AP ID as url to the Metadata builder
_instead_ of passing the internal ID as activity_id.
However, in doing so the commit instead inflicted this very problem
onto Twitter metadata tags which ironically are used by akkoma-fe.

This is because while the OpenGraph builder wants a URL as url,
the Twitter builder needs the internal ID to build the URL to the
embedded player for videos and has no URL property.

Thanks to twpol for tracking down this root cause in #644.

Now, once identified the problem is simple, but this simplicity
invites multiple possible solutions to bikeshed about.

 1. Just pass both properties to the builder and let them pick

 2. Drop the url parameter from the OpenGraph builder and instead
     a) build static-fe URL of the post from the ID (like Twitter)
     b) use the passed-in object’s AP ID as an URL

Approach 2a has the disadvantage of hardcoding the expected URL outside
the router, which will be problematic should it ever change.
Approach 2b is conceptually similar to how the builder works atm.
However, the og:url is supposed to be a _permanent_ ID; by changing it
we might, afaiui, technically violate OpenGraph specs(?). (Though its
real-world consequence may very well be near non-existent.)

This leaves just approach 1, which this commit implements.
Albeit it too is not without nits to pick, as it leaves the metadata
builders with an inconsistent interface.

Additionally, this will resolve the suboptimal Discord previews for
content-less image posts reported in #664.
Discord already prefers OpenGraph metadata, so it’s mostly unaffected.
However, it appears when encountering an explicitly empty OpenGraph
description and a non-empty Twitter description, it replaces just the
empty field with its Twitter counterpart, resulting in the user’s bio
slipping into the preview.
Secondly, regardless of any OpenGraph tags, Discord uses twitter:card to
decide how prominently images should be displayed, but due to the bug the card
type was stuck as "summary", forcing images to always remain small.

Root cause identified by: twpol

Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/644
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/664
2024-02-19 21:09:43 +00:00
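
Approach 1 in sketch form (the exact call site differs, but the builder input now carries both identifiers):

    # The OpenGraph builder reads :url (the permanent AP ID), the Twitter
    # builder reads :activity_id to construct its embedded-player URL.
    Pleroma.Web.Metadata.build_tags(%{
      url: activity.data["id"],
      activity_id: activity.id,
      object: object,
      user: user
    })
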
floatingghost
086d6100e1 Merge pull request 'Disable busy waits in the default OTP vm.args configuration.' (#693) from erincandescent/akkoma:otp-tune-vm-busywait into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/693
2024-02-19 14:01:14 +00:00
floatingghost
3e24210e9f Merge pull request 'Prune old Update activities' (#683) from Oneric/akkoma:db-prune-old-updates into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/683
2024-02-19 13:59:33 +00:00
floatingghost
551ae69541 Merge pull request 'Fix and provide sane defaults for SMTP' (#686) from Oneric/akkoma:smtp-defaults into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/686
2024-02-19 13:39:15 +00:00
YokaiRick
37f9626116 Merge pull request 'Docs: reword description for mrf_reject_newly_created_account_notes for more clarity' (#1) from stefan230/akkoma:doc_mrf_reject_acc_notes_patch into doc_mrf_reject_acc_notes
Reviewed-on: https://akkoma.dev/YokaiRick/akkoma/pulls/1
2024-02-17 22:19:32 +00:00
stefan230
b4c832471c docs/docs/configuration/cheatsheet.md aktualisiert
fixed up some grammar / wording. Removed a sentence and made wording more in line with what I could find in Admin-FE (especially the wording of "rejecting" vs. dropping)
2024-02-17 22:09:47 +00:00
rick
db49daa4a5 make it clearer what it affects 2024-02-17 22:57:56 +01:00
rick
718104117f fix link 2024-02-17 22:34:55 +01:00
rick
12e7d0a25c added doc for mrf_reject_newly_created_account_notes 2024-02-17 22:25:12 +01:00
Oneric
1a7839eaf2 Prune old Update activities
Once processed they serve no purpose anymore afaict.
Therefore, let's prune them like other transient activities
to not unnecessarily bloat the table.
2024-02-17 16:57:40 +01:00
Oneric
1ef8b967d2 test: fix typos affecting remove factory
Apparently nothing used this factory until now
2024-02-17 16:57:40 +01:00
Erin Shepherd
7a0e27a746 Disable busy waits in the default OTP vm.args configuration.
This vastly reduces idle CPU usage, which should generally be beneficial
for most small-to-medium sized instances.

Additionally update the documentation to specify how to override the vm.args
file for OTP installs
2024-02-17 13:21:56 +01:00
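
The usual emulator knobs for this are the scheduler busy-wait flags; a vm.args excerpt along these lines (the exact set chosen by the commit may differ):

    ## Disable busy waiting on the normal, dirty-CPU and dirty-IO schedulers
    +sbwt none
    +sbwtdcpu none
    +sbwtdio none
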
floatingghost
755c75d8a4 Merge pull request 'Clean up warnings (+fallback metrics)' (#685) from Oneric/akkoma:metrics into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/685
2024-02-17 11:41:10 +00:00
floatingghost
289f93f5a2 Merge pull request 'Return last_status_at as date, not datetime' (#681) from katafrakt/akkoma:fix-last-status-at into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/681
2024-02-17 11:37:19 +00:00
floatingghost
371b258c99 Merge pull request 'Fix SimplePolicy blocking account updates' (#692) from Oneric/akkoma:fix-background_removal into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/692
2024-02-17 10:34:16 +00:00
Oneric
3b0714c4fd Fix SimplePolicy blocking account updates
This fixes an oversight in e99e2407f3
which added background_removal as a possible SimplePolicy setting.
However, it did _not_ add a default value to the base config and
as it turns out instance_list doesn’t handle unset options well.

In effect this caused federating instances with SimplePolicy enabled
but background_removal not explicitly configured to always trip up for
outgoing account updates in check_background_removal (and incoming
updates from Sharkey).
For added ""fun"" this error was able to block account updates made
e.g. via /api/v1/accounts/update_credentials.

Tests were unaffected since they explicitly override
all relevant config options.

Set a default to avoid all this
(note to self: don’t forget next time, baka!)
2024-02-17 03:10:05 +01:00
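
The fix in config terms: ship an explicit empty default so instance_list never sees an unset key (illustrative excerpt):

    # config/config.exs: background_removal defaults to "strip nowhere"
    config :pleroma, :mrf_simple,
      background_removal: []
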
floatingghost
34c213f02f Merge pull request 'Federate user profile background' (#682) from Oneric/akkoma:background-federation into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/682
2024-02-16 21:00:10 +00:00
Oneric
e99e2407f3 Add background_removal to SimplePolicy MRF 2024-02-16 16:36:45 +01:00
Oneric
7622aa27ca Federate user profile background
Currently our own frontend doesn’t show backgrounds of other users, but this
property is already publicly readable via REST API and likely was always
intended to be shown and federated.

Recently Sharkey added support for profile backgrounds and
immediately made them federate and be displayed to others.
We use the same AP field as Sharkey here which should make
it interoperable both ways out-of-the-box.

Ref.: 4e64397635
2024-02-16 16:35:51 +01:00
FloatingGhost
0ed815b8a1 Merge branch 'followback' into develop 2024-02-16 13:27:40 +00:00
floatingghost
c5dcd07e08 Merge pull request 'Fix OpenAPI spec for preferred_frontend endpoint' (#680) from katafrakt/akkoma:fix-openapi-spec-for-preferred-frontend into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/680
2024-02-16 12:21:00 +00:00
floatingghost
874ee73a87 Merge pull request 'Document Akkoma API' (#678) from Oneric/akkoma:doc-akkomapi into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/678
2024-02-16 12:20:11 +00:00
floatingghost
a905223837 Merge pull request 'Check permissions on configuration file, not symlink' (#687) from erincandescent/akkoma:config-stat-symlink into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/687
2024-02-16 12:19:08 +00:00
Oneric
cda597a05c doc: fix Akkoma identification name
Akkoma stopped pretending to be Pleroma here when the mix project name
was changed in c07fcdbf2b.
2024-02-15 16:25:59 +01:00
Oneric
711043f57d Document bubble timeline API
It was added in cb6e7359af.
2024-02-15 16:04:33 +01:00
Oneric
6bb455702d Document Akkoma API 2024-02-15 16:04:33 +01:00
Oneric
7493d8f49d Document live dashboard 2024-02-15 16:04:33 +01:00
Haelwenn (lanodan) Monnier
cb7eaccecb Config: Check the permissions of the linked file instead of the symlink↵ 2024-02-14 18:30:27 +01:00
Oneric
376f6b15ca Add ability to auto-approve followbacks
Resolves: https://akkoma.dev/AkkomaGang/akkoma/issues/148
2024-02-13 15:42:37 +01:00
Oneric
13e62b4e51 Fix schema and docs for status_ttl_days and instance
Fixes misspelling and omission of an example in commit
0cfd5b4e89 which added the
status_ttl_days property. This was the only place this commit
referred to the property as note_ttl_days.

Partially fixes the omitted schema update of the instance metadata addition
from commit b7e8ce2350. A proper full schema
for nodeinfo is still missing.
2024-02-13 15:39:52 +01:00
floatingghost
6fde75e1f0 Merge pull request 'Purge leftovers from chats' (#684) from Oneric/akkoma:cosmetic-purge-chat into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/684
2024-02-13 09:13:37 +00:00
Oneric
192480093c Provide sane defaults for SMTP
OTP’s default SSL/TLS settings are rather restrictive
and in particular do not use system CA certs.
In our case using system CA certs is virtually always desired
and the lack of it leads to non-obvious errors. Manually configuring
system CA certs from in-database config also isn’t straightforward.

Furthermore, gen_smtp uses a different set of connection options
for direct SSL/TLS and a later TLS upgrade providing additional
confusion and complexity in how to configure this.

Thus provide some suitable defaults for sending SMTP emails.
Everything can still be overridden by admins if necessary.

Note: defaults are not appended when validating the config
in hopes of improving the error message (as the required relay key
is already accessed to generate defaults for optional fields)

Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/660
2024-02-12 22:45:57 +01:00
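
An illustrative excerpt of such defaults, using the system CA store via :public_key.cacerts_get/0 (OTP 25+) plus peer verification; hostnames here are placeholders and everything remains overridable:

    config :pleroma, Pleroma.Emails.Mailer,
      adapter: Swoosh.Adapters.SMTP,
      relay: "smtp.example.com",
      port: 465,
      ssl: true,
      tls_options: [
        verify: :verify_peer,
        cacerts: :public_key.cacerts_get(),
        server_name_indication: ~c"smtp.example.com"
      ]
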
Oneric
29f564f700 Use fallbacks of summary metrics for prometheus 2024-02-12 02:00:09 +01:00
Oneric
16197ff57a Display memory as MB in live dashboard
With kilobytes the resulting numbers got too large and were cut off
in the charts, making them useless. However, even an idle Akkoma
server’s memory usage is in the lower hundreds of megabytes, so
we don’t need this much precision to begin with for the dashboard.

Other metric users might prefer base units and can handle scaling in a
smarter way, so keep this configurable.
2024-02-12 02:00:09 +01:00
Oneric
8f8e1ff214 Purge unused function scrub_css
Commit e9f1897cfd added this private
function but it never had any users resulting in warnings each startup
2024-02-12 02:00:09 +01:00
Oneric
18ecae6183 Use fully qualified function capture for telemetry event
Otherwise we get warnings on startup as local captures
and anonymous functions are supposedly less performant.
2024-02-12 01:59:18 +01:00
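
The difference in sketch form (module and event names invented):

    defmodule MyApp.Telemetry do
      def attach do
        # &handle_event/4 (a local capture) triggers telemetry's startup
        # warning; the fully qualified form does not and dispatches faster.
        :telemetry.attach("myapp-handler", [:my_app, :event], &__MODULE__.handle_event/4, nil)
      end

      def handle_event(_name, _measurements, _metadata, _config), do: :ok
    end
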
Oneric
a6df71eebb Don't add summary metrics to prometheus
The exporter doesn’t support them, thus we don't lose anything by this,
but it avoids a bunch of warnings each time the server starts up.
2024-02-12 01:59:18 +01:00
Oneric
8cf183cb42 Drop Chat tables
Chats were removed in 0f132b802d
2024-02-11 05:15:08 +01:00
Oneric
5f7d47dcb7 Drop obsolete chat/shoutbox config options
Their functions were purged in 0f132b802d
2024-02-11 05:15:02 +01:00
Paweł Świątkowski
df21b61829
Return last_status_at as date, not datetime 2024-02-05 21:42:15 +01:00
floatingghost
e97d08ee98 Merge pull request 'MRF transparency: don’t forget to obfuscate short domains' (#676) from Oneric/akkoma:mrf-obfuscation into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/676
2024-02-05 08:43:43 +00:00
Paweł Świątkowski
d7d159c49f
Fix OpenAPI spec for preferred_frontend endpoint
The spec was copied from another endpoint, including the operation id,
leading to scrubbing the valid parameters from the request and simply
not working.
2024-02-03 14:27:45 +01:00
Oneric
3cd882528e More prominently document MRF transparency and obfuscation
And point to the cheat sheet for all other MRF policies
and their configuration details.
2024-02-02 14:50:21 +00:00
Oneric
e47c50666d Fix obfuscation of short domains
Fixes https://akkoma.dev/AkkomaGang/akkoma/issues/645
2024-02-02 14:50:13 +00:00
floatingghost
b4ccddab39 Merge pull request 'Fix OAuth consumer mode' (#668) from tcmal/akkoma:develop into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/668
2024-02-02 10:05:42 +00:00
Aria
77000b8ffd update tests for oauth consumer 2023-12-17 21:48:19 +00:00
Aria
a074be24ca add bit about frontend configuration to oauth consumer docs 2023-12-17 19:36:27 +00:00
Aria
eb0dbf6b79 fix oauth consumer mode
the previous code passed a state parameter to ueberauth with info
about where to go after the user logged in, etc.
since ueberauth 0.7, this parameter is ignored and oauth state is used
for actual CSRF reasons.

we now set a cookie with the state we need to keep track of, and read
it once the callback happens.
2023-12-17 19:27:36 +00:00
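
A sketch of the cookie round-trip (cookie name and payload shape invented):

    # On the way out: stash our own state in a signed cookie...
    conn
    |> Plug.Conn.put_resp_cookie("akkoma_oauth_state", Jason.encode!(state), sign: true, max_age: 600)
    |> Phoenix.Controller.redirect(external: authorize_url)

    # ...and in the callback: read it back, leaving ueberauth's own state
    # parameter free for its CSRF check.
    conn = Plug.Conn.fetch_cookies(conn, signed: ["akkoma_oauth_state"])
    {:ok, state} = Jason.decode(conn.cookies["akkoma_oauth_state"])
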
Aria
e2f749b5b0 don't select ueberauth 0.10.6, as it is broken
see https://github.com/ueberauth/ueberauth/issues/194
2023-12-17 18:59:31 +00:00
FloatingGhost
6fb91d79f3 bump deps 2023-12-15 16:32:53 +00:00
FloatingGhost
2858cd81e1 Move changelog into our format 2023-12-15 16:32:41 +00:00
Lain Soykaf
c3098e9c56 UserViewTest: Add basice service actor test. 2023-12-15 16:31:51 +00:00
Yonle
8a0e797cff ap userview: add outbox field.
Signed-off-by: Yonle <yonle@lecturify.net>
2023-12-15 16:31:51 +00:00
FloatingGhost
74d5e22fc5 fix robotstxt on OTP 2023-12-15 16:23:20 +00:00
floatingghost
bc22ea50ab Merge pull request 'docs: Fixed wrong command for robots_txt CLI task' (#632) from yukijoou/akkoma:docs-robotstxt-fix into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/632
2023-12-15 16:21:17 +00:00
floatingghost
8ae5364886 Merge pull request 'Add shm_size to the Database container' (#634) from EpicKitty/akkoma:develop into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/634
2023-12-15 16:20:45 +00:00
FloatingGhost
6cc523bd23 Correct email links to be absolute URLs 2023-11-02 11:49:03 +00:00
FloatingGhost
fb700a956a correct link 2023-11-02 11:40:19 +00:00
floatingghost
c12d158491 Merge pull request 'Add more image mimetypes to reverse proxy' (#658) from Seirdy/akkoma:moar-image-types into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/658
2023-11-02 11:38:40 +00:00
floatingghost
ed5c930dd9 Merge pull request 'Docs: Add note about Docker installations in backup section' (#631) from y0nei/akkoma:develop into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/631
2023-11-02 11:04:39 +00:00
floatingghost
3cca953c58 Merge pull request 'added support for arm64 in pleroma_ctl' (#630) from YokaiRick/akkoma:arm64-pleroma_ctl into develop
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/630
2023-11-02 11:03:22 +00:00
Rohan Kumar
36f4f18aa5
Add more image mimetypes to reverse proxy
Add JPEG-XL, AVIF, and WebP support to the reverse proxy. All three are
supported in WebKit browsers; the latter two are supported in Gecko and
Blink.
2023-11-01 17:47:52 -07:00
FloatingGhost
033b7b04e0 update captcha version 2023-10-20 13:30:29 +01:00
FloatingGhost
d1af78aba1 changelog 2023-09-15 12:00:45 +01:00
FloatingGhost
3e7446d177 Add various both-ugc-and-tag setups 2023-09-15 11:58:56 +01:00
FloatingGhost
c8e08e9cc3 fix issue with API cascading domain blocks but not honouring them 2023-08-25 11:00:49 +01:00
Koneko Toujou
1b9edcba64 Add shm_size to the Database container 2023-08-21 21:50:10 +00:00
yuki joou
32422a7a04 docs: Fixed wrong command for robots_txt CLI task
This is according to the error message displayed when trying to run the
command in the current version of the docs
2023-08-18 13:25:52 +00:00
y0nei
0617090743
Note about Docker installations in backup section 2023-08-17 16:51:53 +02:00
YokaiRick
6ec5437294 added support for arm64
added arm64 support for pleroma_ctl update.
Tested on Arch amd64, Debian arm64 and Alpine amd64
2023-08-16 20:58:21 +00:00
Norm
0cb3812ac0
Add docker override file to docs and gitignore
The docker-compose.yml file is likely to be edited quite extensively by
admins when setting up an instance. This would likely cause problems
when dealing with updating Akkoma as merge conflicts would likely occur.

Docker-compose already has the ability to use override files in addition
to the main `docker-compose.yml` file. Admins can instead put any
overrides (additional volumes, container for elasticsearch, etc.) into a
file that won't be tracked by git and thus won't run into merge
conflicts in the future. In particular, the
`docker-compose.override.yml` will be checked by docker compose in
addition to the main file if it exists and override definitions from the
latter with the former.
2023-08-07 13:09:04 -04:00
ilja
3947012691 Fix warnings
There were two warnings; these are now fixed.

I moved the fonts folder into the css folder. Another option was to change the relative path,
but it seems that after changing it in the css file, the path got changed back when rebuilding the site.
Maybe it needs to be changed somewhere else, idk, this worked.
2023-05-29 09:10:07 +02:00
ilja
d61b7d4b49 Improve backup restore
CREATE DATABASE was running in a transaction block with CREATE USER. This isn't allowed (any more?).
This is now two separate commands.

I also did some other touch-ups including
* making it OTP-first,
* adding a backup of the static directory because this contains e.g. custom emoji, and
* removing the suggestion to use the setup_db.psql file. The reason is that I fear it causes more confusion than it's worth.
    * Firstly, OTP installations won't have this file because it's created in /tmp.
    * Secondly, the instance may have been reinstalled and thus a new setup_db.psql with a different password may have been created, causing only more confusion.
2023-05-29 09:09:56 +02:00
Ilja
66a04cead3 Descriptions from exif data with only whitespace are considered empty
I noticed that pictures taken with Ubuntu-Touch have whitespace in one of the fields
This should just be ignored imo
2022-10-23 14:46:22 +02:00
Ilja
f50cffd134 update moduledoc 2022-10-23 14:46:22 +02:00
Ilja
338612d72b Use EXIF data of image to prefill image description
During attachment upload Pleroma returns a "description" field.

* This MR allows Pleroma to read the EXIF data during upload and return the description to the FE using this field.
    * If a description is already present (e.g. because a previous module added it), it will use that
    * Otherwise it will read from the EXIF data. First it will check -ImageDescription; if that's empty, it will check -iptc:Caption-Abstract
    * If no description is found, it will simply return nil, which is the default value
* When people set up a new instance, they will be asked if they want to read metadata and this module will be activated if so

There was an Exiftool module, which has now been renamed to Exiftool.StripLocation
2022-10-23 14:46:16 +02:00
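
A condensed sketch of the lookup order using exiftool directly (helper name invented):

    # Existing descriptions win upstream of this; here the first tag with a
    # non-empty value is returned, otherwise nil.
    defp exif_description(file) do
      Enum.find_value(["-ImageDescription", "-iptc:Caption-Abstract"], fn tag ->
        with {out, 0} <- System.cmd("exiftool", ["-b", tag, file]),
             desc when desc != "" <- String.trim(out) do
          desc
        else
          _ -> nil
        end
      end)
    end
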
332 changed files with 47288 additions and 23111 deletions

@@ -1 +0,0 @@
-https://github.com/hashnuke/heroku-buildpack-elixir

.gitignore

@@ -78,3 +78,4 @@ docs/venv
 # docker stuff
 docker-db
 *.iml
+docker-compose.override.yml

@@ -1,4 +1,5 @@
-platform: linux/amd64
+labels:
+  platform: linux/amd64
 depends_on:
 - test
@@ -34,7 +35,7 @@ variables:
 - &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
 - &mix-clean "mix deps.clean --all && mix clean"
-pipeline:
+steps:
 # Canonical amd64
 debian-bookworm:
 image: hexpm/elixir:1.15.4-erlang-26.0.2-debian-bookworm-20230612

@@ -1,4 +1,5 @@
-platform: linux/arm64
+labels:
+  platform: linux/aarch64
 depends_on:
 - test
@@ -34,7 +35,7 @@ variables:
 - &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
 - &mix-clean "mix deps.clean --all && mix clean"
-pipeline:
+steps:
 # Canonical arm64
 debian-bookworm:
 image: hexpm/elixir:1.15.4-erlang-26.0.2-debian-bookworm-20230612

@@ -1,4 +1,5 @@
-platform: linux/amd64
+labels:
+  platform: linux/amd64
 depends_on:
 - test
@@ -45,7 +46,7 @@ variables:
 - &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
 - &mix-clean "mix deps.clean --all && mix clean"
-pipeline:
+steps:
 docs:
 <<: *on-point-release
 secrets:

@@ -1,4 +1,5 @@
-platform: linux/amd64
+labels:
+  platform: linux/amd64
 variables:
 - &scw-secrets
@@ -41,9 +42,9 @@ variables:
 - &clean "(rm -rf release || true) && (rm -rf _build || true) && (rm -rf /root/.mix)"
 - &mix-clean "mix deps.clean --all && mix clean"
-pipeline:
+steps:
 lint:
-image: akkoma/ci-base:1.15-otp26
+image: akkoma/ci-base:1.16-otp26
 <<: *on-pr-open
 environment:
 MIX_ENV: test

@@ -1,4 +1,5 @@
-platform: linux/amd64
+labels:
+  platform: linux/amd64
 depends_on:
 - lint
@@ -7,15 +8,12 @@ matrix:
 ELIXIR_VERSION:
 - 1.14
 - 1.15
+- 1.16
 OTP_VERSION:
 - 25
 - 26
 include:
-  - ELIXIR_VERSION: 1.14
-    OTP_VERSION: 25
-  - ELIXIR_VERSION: 1.15
-    OTP_VERSION: 25
-  - ELIXIR_VERSION: 1.15
+  - ELIXIR_VERSION: 1.16
     OTP_VERSION: 26
 variables:
@@ -70,7 +68,7 @@ services:
 POSTGRES_USER: postgres
 POSTGRES_PASSWORD: postgres
-pipeline:
+steps:
 test:
 image: akkoma/ci-base:${ELIXIR_VERSION}-otp${OTP_VERSION}
 <<: *on-pr-open

View file

@ -4,22 +4,128 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/). The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Unreleased ## UNRELEASED
## BREAKING
- Minimum PostgreSQL version is raised to 12
## Added
- Implement [FEP-67ff](https://codeberg.org/fediverse/fep/src/branch/main/fep/67ff/fep-67ff.md) (federation documentation)
- Meilisearch: it is now possible to use separate keys for search and admin actions
- New standalone `prune_orphaned_activities` mix task with configurable batch limit
- The `prune_objects` mix task now accepts a `--limit` parameter for initial object pruning
## Fixed
- Meilisearch: order of results returned from our REST API now actually matches how Meilisearch ranks results
## Changed
- Refactored Rich Media to cache the content in the database. Fetching operations that could block status rendering have been eliminated.
## 2024.04.1 (Security)
## Fixed
- Issue allowing non-owners to use media objects in posts
- Issue allowing use of non-media objects as attachments and crashing timeline rendering
- Issue allowing webfinger spoofing in certain situations
## 2024.04
## Added
- Support for [FEP-fffd](https://codeberg.org/fediverse/fep/src/branch/main/fep/fffd/fep-fffd.md) (proxy objects)
- Verified support for elixir 1.16
- Uploadfilter `Pleroma.Upload.Filter.Exiftool.ReadDescription` returns description values to the FE so they can pre fill the image description field
NOTE: this filter MUST be placed before `Exiftool.StripMetadata` to work
## Changed
- Inbound pipeline error handing was modified somewhat, which should lead to less incomprehensible log spam. Hopefully.
- Uploadfilter `Pleroma.Upload.Filter.Exiftool` was replaced by `Pleroma.Upload.Filter.Exiftool.StripMetadata`;
the latter strips all non-essential metadata by default but can be configured.
To regain the old behaviour of only stripping GPS data set `purge: ["gps:all"]`.
- Uploadfilter `Pleroma.Upload.Filter.Exiftool` has been renamed to `Pleroma.Upload.Filter.Exiftool.StripMetadata`
- MRF.InlineQuotePolicy now prefers to insert display URLs instead of ActivityPub IDs
- Old accounts are no longer listed in WebFinger as aliases; this was breaking spec
## Fixed
- Issue preventing fetching anything from IPv6-only instances
- Issue allowing post content to leak via opengraph tags despite :estrict\_unauthenticated being set
- Move activities no longer operate on stale user data
- Missing definitions in our JSON-LD context
- Issue mangling newlines in code blocks for RSS/Atom feeds
- static\_fe squeezing non-square avatars and emoji
- Issue leading to properly JSON-LD compacted emoji reactions being rejected
- We now use a standard-compliant Accept header when fetching ActivityPub objects
- /api/pleroma/notification\_settings was rejecting body parameters;
this also broke changing this setting via akkoma-fe
- Issue leading to Mastodon bot accounts being rejected
- Scope misdetection of remote posts resulting from not recognising
JSON-LD-compacted forms of public scope; affected e.g. federation with bovine
- Ratelimits encountered when fetching objects are now respected; 429 responses will cause a backoff when we get one.
## Removed
- ActivityPub Client-To-Server write API endpoints have been disabled;
read endpoints are planned to be removed next release unless a clear need is demonstrated
## 2024.03
## Added
- CLI tasks best-effort checking for past abuse of the recent spoofing exploit
- new `:mrf_steal_emoji, :download_unknown_size` option; defaults to `false`
## Changed
- `Pleroma.Upload, :base_url` now MUST be configured explicitly if used;
use of the same domain as the instance is **strongly** discouraged
- `:media_proxy, :base_url` now MUST be configured explicitly if used;
use of the same domain as the instance is **strongly** discouraged
- StealEmoji:
- now uses the pack.json format;
existing users must migrate with an out-of-band script (check release notes)
- only steals shortcodes recognised as valid
- URLs of stolen emoji is no longer predictable
- The `Dedupe` upload filter is now always active;
`AnonymizeFilenames` is again opt-in
- received AP data is sanity checked before we attempt to parse it as a user
- Uploads, emoji and media proxy now restrict Content-Type headers to a safe subset
- Akkoma will no longer fetch and parse objects hosted on the same domain
## Fixed
- Critical security issue allowing Akkoma to be used as a vector for
(depending on configuration) impersonation of other users or creation
of bogus users and posts on the upload domain
- Critical security issue letting Akkoma fall for the above impersonation
payloads due to lack of strict id checking
- Critical security issue allowing the target of a redirect to pose as the initial domain
  (e.g. with media proxy's fallback redirects)
- refetched objects can no longer attribute themselves to third-party actors
(this had no externally visible effect since actor info is read from the Create activity)
- our litepub JSON-LD schema is now served with the correct content type
- remote APNG attachments are now recognised as images
## Upgrade Notes
- As mentioned in "Changed", `Pleroma.Upload, :base_url` **MUST** be configured. Uploads will fail without it.
- Akkoma will refuse to start if this is not set.
- Same with media proxy.
## 2024.02
## Added
- Full compatibility with Erlang OTP26
- handling of GET /api/v1/preferences
-- Explicit listing of config keys that are allowed to be set by the database
-- Previously set config keys will still be loaded, but you will get a warning
-  that they probably should not be dynamically configured.
+- Akkoma API is now documented
+- ability to auto-approve follow requests from users you are already following
+- The SimplePolicy MRF can now strip user backgrounds from selected remote hosts
## Changed
- OTP builds are now built on erlang OTP26
- The base Phoenix framework is now updated to 1.7
+- An `outbox` field has been added to actor profiles to comply with AP spec
+- User profile backgrounds do now federate with other Akkoma instances and Sharkey
## Fixed
- Documentation issue in which a non-existing nginx file was referenced
- Issue where a bad inbox URL could break federation
+- Issue where hashtag rel values would be scrubbed
+- Issue where short domains listed in `transparency_obfuscate_domains` were not actually obfuscated
## 2023.08

FEDERATION.md (new file, 42 lines)
@@ -0,0 +1,42 @@
# Federation
## Supported federation protocols and standards
- [ActivityPub](https://www.w3.org/TR/activitypub/) (Server-to-Server)
- [WebFinger](https://webfinger.net/)
- [Http Signatures](https://datatracker.ietf.org/doc/html/draft-cavage-http-signatures)
- [NodeInfo](https://nodeinfo.diaspora.software/)
## Supported FEPs
- [FEP-67ff: FEDERATION](https://codeberg.org/fediverse/fep/src/branch/main/fep/67ff/fep-67ff.md)
- [FEP-f1d5: NodeInfo in Fediverse Software](https://codeberg.org/fediverse/fep/src/branch/main/fep/f1d5/fep-f1d5.md)
- [FEP-fffd: Proxy Objects](https://codeberg.org/fediverse/fep/src/branch/main/fep/fffd/fep-fffd.md)
## ActivityPub
Akkoma mostly follows the server-to-server parts of the ActivityPub standard,
but implements quirks for Mastodon compatibility as well as Mastodon-specific
and custom extensions.
See our documentation and Mastodon's federation information
linked further below for details on these quirks and extensions.
Akkoma does not perform JSON-LD processing.
### Required extensions
#### HTTP Signatures
All AP S2S POST requests to Akkoma instances MUST be signed.
Depending on instance configuration the same may be true for GET requests.
## Nodeinfo
Akkoma provides many additional entries in its nodeinfo response,
see the documentation linked below for details.
## Additional documentation
- [Akkoma's ActivityPub extensions](https://docs.akkoma.dev/develop/development/ap_extensions/)
- [Akkoma's nodeinfo extensions](https://docs.akkoma.dev/develop/development/nodeinfo_extensions/)
- [Mastodon's federation requirements](https://github.com/mastodon/mastodon/blob/main/FEDERATION.md)

@@ -1,2 +0,0 @@
-web: mix phx.server
-release: mix ecto.migrate


@@ -1,16 +1,21 @@
-# Pleroma backend security policy
+# Akkoma backend security handling
-
-## Supported versions
-
-Currently, Pleroma offers bugfixes and security patches only for the latest minor release.
-
-| Version | Support
-|---------| --------
-| 2.2 | Bugfixes and security patches
 
 ## Reporting a vulnerability
 
-Please use confidential issues (tick the "This issue is confidential and should only be visible to team members with at least Reporter access." box when submitting) at our [bugtracker](https://git.pleroma.social/pleroma/pleroma/-/issues/new) for reporting vulnerabilities.
+Please send an email (preferably encrypted) or
+a DM via our IRC to one of the following people:
+
+| Forgejo nick | IRC nick      | Email         | GPG                                      |
+| ------------ | ------------- | ------------- | ---------------------------------------- |
+| floatinghost | FloatingGhost | *see GPG key* | https://coffee-and-dreams.uk/pubkey.asc  |
 
 ## Announcements
 
-New releases are announced at [pleroma.social](https://pleroma.social/announcements/). All security releases are tagged with ["Security"](https://pleroma.social/announcements/tags/security/). You can be notified of them by subscribing to an Atom feed at <https://pleroma.social/announcements/tags/security/feed.xml>.
+New releases and security issues are announced at
+[meta.akkoma.dev](https://meta.akkoma.dev/c/releases) and
+[@akkoma@ihatebeinga.live](https://ihatebeinga.live/akkoma).
+
+Both also offer RSS feeds
+([meta](https://meta.akkoma.dev/c/releases/7.rss),
+[fedi](https://ihatebeinga.live/users/akkoma.rss))
+so you can keep an eye on it without any accounts.


@@ -49,9 +49,7 @@ config :pleroma, ecto_repos: [Pleroma.Repo]
 config :pleroma, Pleroma.Repo,
   telemetry_event: [Pleroma.Repo.Instrumenter],
   queue_target: 20_000,
-  migration_lock: nil,
-  parameters: [gin_fuzzy_search_limit: "500"],
-  prepare: :unnamed
+  migration_lock: nil
 
 config :pleroma, Pleroma.Captcha,
   enabled: true,
@@ -63,11 +61,11 @@ config :pleroma, Pleroma.Captcha.Kocaptcha, endpoint: "https://captcha.kotobank.
 # Upload configuration
 config :pleroma, Pleroma.Upload,
   uploader: Pleroma.Uploaders.Local,
-  filters: [Pleroma.Upload.Filter.Dedupe],
+  filters: [],
   link_name: false,
-  proxy_remote: false,
   filename_display_max_length: 30,
-  base_url: nil
+  base_url: nil,
+  allowed_mime_types: ["image", "audio", "video"]
 
 config :pleroma, Pleroma.Uploaders.Local, uploads: "uploads"
@@ -77,7 +75,6 @@ config :pleroma, Pleroma.Uploaders.S3,
   truncated_namespace: nil,
   streaming_enabled: true
 
-# This cannot be configured in the database!
 config :ex_aws, :s3,
   # host: "s3.wasabisys.com", # required if not Amazon AWS
   access_key_id: nil,
@@ -151,18 +148,38 @@ config :logger, :ex_syslogger,
   format: "$metadata[$level] $message",
   metadata: [:request_id]
 
+# ———————————————————————————————————————————————————————————————
+# W A R N I N G
+# ———————————————————————————————————————————————————————————————
+#
+# Whenever adding a privileged new custom type for e.g.
+# ActivityPub objects, ALWAYS map their extension back
+# to "application/octet-stream".
+# Else files served by us can automatically end up with
+# those privileged types causing severe security hazards.
+# (We need those mappings so Phoenix can associate its format
+# (the "extension") to incoming requests of those MIME types)
+#
+# ———————————————————————————————————————————————————————————————
 config :mime, :types, %{
   "application/xml" => ["xml"],
   "application/xrd+xml" => ["xrd+xml"],
   "application/jrd+json" => ["jrd+json"],
   "application/activity+json" => ["activity+json"],
-  "application/ld+json" => ["activity+json"]
+  "application/ld+json" => ["activity+json"],
+  # Can be removed when bumping MIME past 2.0.5
+  # see https://akkoma.dev/AkkomaGang/akkoma/issues/657
+  "image/apng" => ["apng"]
 }
 
 config :mime, :extensions, %{
-  "activity+json" => "application/activity+json"
+  "xrd+xml" => "text/plain",
+  "jrd+json" => "text/plain",
+  "activity+json" => "text/plain"
 }
+# ———————————————————————————————————————————————————————————————
 
 config :tesla, :adapter, {Tesla.Adapter.Finch, name: MyFinch}
 
 # Configures http settings, upstream proxy etc.
@@ -171,8 +188,10 @@ config :pleroma, :http,
   receive_timeout: :timer.seconds(15),
   proxy_url: nil,
   user_agent: :default,
-  pool_size: 50,
-  adapter: []
+  pool_size: 10,
+  adapter: [],
+  # see: https://hexdocs.pm/finch/Finch.html#start_link/1
+  pool_max_idle_time: :timer.seconds(30)
 
 config :pleroma, :instance,
   name: "Akkoma",
@@ -293,7 +312,6 @@ config :pleroma, :frontend_configurations,
     alwaysShowSubjectInput: true,
     background: "/images/city.jpg",
     collapseMessageWithSubject: false,
-    disableChat: false,
     greentext: false,
     hideFilteredStatuses: false,
     hideMutedPosts: false,
@@ -381,6 +399,7 @@ config :pleroma, :mrf_simple,
   accept: [],
   avatar_removal: [],
   banner_removal: [],
+  background_removal: [],
   reject_deletes: [],
   handle_threads: true
@@ -409,8 +428,6 @@ config :pleroma, :mrf_object_age,
   threshold: 604_800,
   actions: [:delist, :strip_followers]
 
-config :pleroma, :mrf_follow_bot, follower_nickname: nil
-
 config :pleroma, :mrf_reject_newly_created_account_notes, age: 86_400
config :pleroma, :rich_media, config :pleroma, :rich_media,
@ -421,8 +438,12 @@ config :pleroma, :rich_media,
Pleroma.Web.RichMedia.Parsers.TwitterCard, Pleroma.Web.RichMedia.Parsers.TwitterCard,
Pleroma.Web.RichMedia.Parsers.OEmbed Pleroma.Web.RichMedia.Parsers.OEmbed
], ],
failure_backoff: :timer.minutes(20), failure_backoff: 60_000,
ttl_setters: [Pleroma.Web.RichMedia.Parser.TTL.AwsSignedUrl] ttl_setters: [
Pleroma.Web.RichMedia.Parser.TTL.AwsSignedUrl,
Pleroma.Web.RichMedia.Parser.TTL.Opengraph
],
max_body: 5_000_000
config :pleroma, :media_proxy, config :pleroma, :media_proxy,
enabled: false, enabled: false,
@@ -560,7 +581,9 @@ config :pleroma, Oban,
     mute_expire: 5,
     search_indexing: 10,
     nodeinfo_fetcher: 1,
-    database_prune: 1
+    database_prune: 1,
+    rich_media_backfill: 2,
+    rich_media_expiration: 2
   ],
   plugins: [
     Oban.Plugins.Pruner,
@@ -576,7 +599,8 @@ config :pleroma, :workers,
   retries: [
     federator_incoming: 5,
     federator_outgoing: 5,
-    search_indexing: 2
+    search_indexing: 2,
+    rich_media_backfill: 3
   ],
   timeout: [
     activity_expiration: :timer.seconds(5),
@@ -598,7 +622,8 @@ config :pleroma, :workers,
     mute_expire: :timer.seconds(5),
     search_indexing: :timer.seconds(5),
     nodeinfo_fetcher: :timer.seconds(10),
-    database_prune: :timer.minutes(10)
+    database_prune: :timer.minutes(10),
+    rich_media_backfill: :timer.seconds(30)
   ]
 
 config :pleroma, Pleroma.Formatter,
@@ -796,6 +821,12 @@ config :pleroma, :modules, runtime_dir: "instance/modules"
 
 config :pleroma, configurable_from_database: false
 
+config :pleroma, Pleroma.Repo,
+  parameters: [
+    gin_fuzzy_search_limit: "500",
+    plan_cache_mode: "force_custom_plan"
+  ]
+
 config :pleroma, :majic_pool, size: 2
 
 private_instance? = :if_instance_is_private


@@ -1,5 +1,16 @@
 import Config
 
+websocket_config = [
+  path: "/websocket",
+  serializer: [
+    {Phoenix.Socket.V1.JSONSerializer, "~> 1.0.0"},
+    {Phoenix.Socket.V2.JSONSerializer, "~> 2.0.0"}
+  ],
+  timeout: 60_000,
+  transport_log: false,
+  compress: false
+]
+
 installed_frontend_options = [
   %{
     key: "name",
@@ -89,18 +100,23 @@ config :pleroma, :config_description, [
         label: "Base URL",
         type: :string,
         description:
-          "Base URL for the uploads. Required if you use a CDN or host attachments under a different domain.",
+          "Base URL for the uploads. Required if you use a CDN or host attachments under a different domain - it is HIGHLY recommended that you **do not** set this to be the same as the domain akkoma is hosted on.",
         suggestions: [
-          "https://cdn-host.com"
+          "https://media.akkoma.dev/media/"
         ]
       },
       %{
-        key: :proxy_remote,
-        type: :boolean,
-        description: """
-        Proxy requests to the remote uploader.\n
-        Useful if media upload endpoint is not internet accessible.
-        """
+        key: :allowed_mime_types,
+        label: "Allowed MIME types",
+        type: {:list, :string},
+        description:
+          "List of MIME (main) types uploads are allowed to identify themselves with. Other types may still be uploaded, but will identify as a generic binary to clients. WARNING: Loosening this over the defaults can lead to security issues. Removing types is safe, but only add to the list if you are sure you know what you are doing.",
+        suggestions: [
+          "image",
+          "audio",
+          "video",
+          "font"
+        ]
       },
       %{
         key: :filename_display_max_length,
@@ -198,6 +214,26 @@ config :pleroma, :config_description, [
         }
       ]
     },
+    %{
+      group: :pleroma,
+      key: Pleroma.Upload.Filter.Exiftool.StripMetadata,
+      type: :group,
+      description: "Strip specified metadata from image uploads",
+      children: [
+        %{
+          key: :purge,
+          description: "Metadata fields or groups to strip",
+          type: {:list, :string},
+          suggestions: ["all", "CommonIFD0"]
+        },
+        %{
+          key: :preserve,
+          description: "Metadata fields or groups to preserve (takes precedence over stripping)",
+          type: {:list, :string},
+          suggestions: ["ColorSpaces", "Orientation"]
+        }
+      ]
+    },
     %{
       group: :pleroma,
       key: Pleroma.Emails.Mailer,
@@ -423,7 +459,6 @@ config :pleroma, :config_description, [
     label: "URI Schemes",
     type: :group,
     description: "URI schemes related settings",
-    db_exclusion_reason: "Does not make sense to configure dynamically",
     children: [
       %{
         key: :valid_schemes,
@@ -454,7 +489,6 @@ config :pleroma, :config_description, [
     key: :features,
     type: :group,
     description: "Customizable features",
-    db_exclusion_reason: "Should be provided at boot-time",
     children: [
       %{
         key: :improved_hashtag_timeline,
@@ -470,7 +504,6 @@ config :pleroma, :config_description, [
     key: :populate_hashtags_table,
     type: :group,
     description: "`populate_hashtags_table` background migration settings",
-    db_exclusion_reason: "Should be provided at boot-time",
     children: [
       %{
         key: :fault_rate_allowance,
@@ -1641,7 +1674,6 @@ config :pleroma, :config_description, [
     key: Pleroma.Web.MediaProxy.Invalidation.Script,
     type: :group,
     description: "Invalidation script settings",
-    db_exclusion_reason: "Provides an arbitrary execution path",
     children: [
       %{
         key: :script_path,
@@ -1755,7 +1787,6 @@ config :pleroma, :config_description, [
   %{
     group: :web_push_encryption,
     key: :vapid_details,
-    db_exclusion_reason: "Webserver secret keys",
     label: "Vapid Details",
     type: :group,
     description:
@@ -1827,13 +1858,19 @@ config :pleroma, :config_description, [
   %{
     group: :pleroma,
     label: "Pleroma Admin Token",
+    type: :group,
     description:
       "Allows setting a token that can be used to authenticate requests with admin privileges without a normal user account token. Append the `admin_token` parameter to requests to utilize it. (Please reconsider using HTTP Basic Auth or OAuth-based authentication if possible)",
-    type: :string,
-    suggestions: [
-      "Please use a high entropy string or UUID"
-    ],
-    db_exclusion_reason: "Can provide passwordless admin access"
+    children: [
+      %{
+        key: :admin_token,
+        type: :string,
+        description: "Admin token",
+        suggestions: [
+          "Please use a high entropy string or UUID"
+        ]
+      }
+    ]
   },
   %{
     group: :pleroma,
@@ -2177,7 +2214,6 @@ config :pleroma, :config_description, [
     label: "Pleroma Authenticator",
     type: :group,
     description: "Authenticator",
-    db_exclusion_reason: "Should be provided at boot-time",
    children: [
       %{
         key: Pleroma.Web.Auth.Authenticator,
@@ -2191,7 +2227,6 @@ config :pleroma, :config_description, [
     key: :ldap,
     label: "LDAP",
     type: :group,
-    db_exclusion_reason: "Provides access to another service",
     description:
       "Use LDAP for user authentication. When a user logs in to the Pleroma instance, the name and password" <>
         " will be verified by trying to authenticate (bind) to a LDAP server." <>
@@ -2591,7 +2626,6 @@ config :pleroma, :config_description, [
     label: "Mime Types",
     type: :group,
     description: "Mime Types settings",
-    db_exclusion_reason: "Should be provided at compile-time",
     children: [
       %{
         key: :types,
@@ -2675,8 +2709,8 @@ config :pleroma, :config_description, [
       %{
         key: :pool_size,
         type: :integer,
-        description: "Number of concurrent outbound HTTP requests to allow. Default 50.",
-        suggestions: [50]
+        description: "Number of concurrent outbound HTTP requests to allow PER HOST. Default 10.",
+        suggestions: [10]
       },
       %{
         key: :adapter,
@@ -2699,6 +2733,13 @@ config :pleroma, :config_description, [
           ]
         }
       ]
+    },
+    %{
+      key: :pool_max_idle_time,
+      type: :integer,
+      description:
+        "Number of seconds to retain an HTTP pool; pool will remain if actively in use. Default 30 seconds (in ms).",
+      suggestions: [30_000]
     }
   ]
 },
@@ -2798,7 +2839,6 @@ config :pleroma, :config_description, [
     group: :cors_plug,
     label: "CORS plug config",
     type: :group,
-    db_exclusion_reason: "Should be provided at compile-time",
     children: [
       %{
         key: :max_age,
@@ -2956,7 +2996,6 @@ config :pleroma, :config_description, [
     key: :modules,
     type: :group,
     description: "Custom Runtime Modules",
-    db_exclusion_reason: "Allows for custom elixir execution",
     children: [
       %{
         key: :runtime_dir,
@@ -3093,7 +3132,6 @@ config :pleroma, :config_description, [
     group: :ex_aws,
     key: :s3,
     type: :group,
-    db_exclusion_reason: "Provides access to another service",
     descriptions: "S3 service related settings",
     children: [
       %{
@@ -3473,7 +3511,6 @@ config :pleroma, :config_description, [
     key: :argos_translate,
     type: :group,
     description: "ArgosTranslate Settings.",
-    db_exclusion_reason: "Excluded for being able to set arbitrary paths to executables",
     children: [
       %{
         key: :command_argos_translate,


@@ -1,25 +0,0 @@
-import Config
-
-config :pleroma, Pleroma.Web.Endpoint,
-  http: [
-    port: String.to_integer(System.get_env("PORT") || "4000"),
-    protocol_options: [max_request_line_length: 8192, max_header_value_length: 8192]
-  ],
-  protocol: "http",
-  secure_cookie_flag: false,
-  url: [host: System.get_env("APP_HOST"), scheme: "https", port: 443],
-  secret_key_base: "+S+ULgf7+N37c/lc9K66SMphnjQIRGklTu0BRr2vLm2ZzvK0Z6OH/PE77wlUNtvP"
-
-database_url =
-  System.get_env("DATABASE_URL") ||
-    raise """
-    environment variable DATABASE_URL is missing.
-    For example: ecto://USER:PASS@HOST/DATABASE
-    """
-
-config :pleroma, Pleroma.Repo,
-  # ssl: true,
-  url: database_url,
-  pool_size: String.to_integer(System.get_env("POOL_SIZE") || "10")
-
-config :pleroma, :instance, name: "#{System.get_env("APP_NAME")} CI Instance"


@@ -22,9 +22,12 @@ config :logger, :console,
 config :pleroma, :auth, oauth_consumer_strategies: []
 
 config :pleroma, Pleroma.Upload,
+  base_url: "http://localhost:4001/media/",
   filters: [],
   link_name: false
 
+config :pleroma, :media_proxy, base_url: "http://localhost:4001"
+
 config :pleroma, Pleroma.Uploaders.Local, uploads: "test/uploads"
 
 config :pleroma, Pleroma.Emails.Mailer, adapter: Swoosh.Adapters.Test, enabled: true
@@ -60,7 +63,8 @@ config :tesla, adapter: Tesla.Mock
 config :pleroma, :rich_media,
   enabled: false,
   ignore_hosts: [],
-  ignore_tld: ["local", "localdomain", "lan"]
+  ignore_tld: ["local", "localdomain", "lan"],
+  max_body: 2_000_000
 
 config :pleroma, :instance,
   multi_factor_authentication: [
@@ -138,6 +142,8 @@ config :phoenix, :plug_init_mode, :runtime
 config :pleroma, :instances_favicons, enabled: false
 config :pleroma, :instances_nodeinfo, enabled: false
 
+config :pleroma, Pleroma.Web.RichMedia.Backfill, provider: Pleroma.Web.RichMedia.Backfill
+
 if File.exists?("./config/test.secret.exs") do
   import_config "test.secret.exs"
 else


@@ -1,7 +0,0 @@
-{
-  "skip_files": [
-    "test/support",
-    "lib/mix/tasks/pleroma/benchmark.ex",
-    "lib/credo/check/consistency/file_location.ex"
-  ]
-}


@@ -0,0 +1,10 @@
if [ "$#" -ne 2 ]; then
echo "Usage: binary-leak-checker.sh <nodename> <erlang cookie>"
exit 1
fi
echo "The command you want to run is:
:recon.bin_leak(10)
"
iex --sname debug --remsh $1 --erl "-setcookie $2"


@@ -4,6 +4,7 @@ services:
   db:
     image: akkoma-db:latest
     build: ./docker-resources/database
+    shm_size: 4gb
     restart: unless-stopped
     user: ${DOCKER_USER}
     environment: {
@@ -45,7 +46,7 @@ services:
     volumes:
       - .:/opt/akkoma
 
-  # Uncomment the following if you want to use a reverse proxy
+  # Copy this into docker-compose.override.yml and uncomment there if you want to use a reverse proxy
   #proxy:
   #  image: caddy:2-alpine
   #  restart: unless-stopped


@@ -50,9 +50,39 @@ This will prune remote posts older than 90 days (configurable with [`config :ple
- `--keep-threads` - Don't prune posts when they are part of a thread where at least one post has seen local interaction (e.g. one of the posts is a local post, or is favourited by a local user, or has been repeated by a local user...). It also won't delete posts when at least one of the posts in that thread is kept (e.g. because one of the posts has seen recent activity).
- `--keep-non-public` - Keep non-public posts like DMs and followers-only, even if they are remote.
- `--limit` - limits how many remote posts get pruned. This limit does **not** apply to any of the follow-up jobs. If you want to keep the database load in check, it is thus advisable to run the standalone `prune_orphaned_activities` task with a limit afterwards instead of passing `--prune-orphaned-activities` to this task.
- `--prune-orphaned-activities` - Also prune orphaned activities afterwards. Activities are things like Like, Create, Announce, Flag (aka reports)... They can significantly help reduce the database size.
- `--vacuum` - Run `VACUUM FULL` after the objects are pruned. This should not be used on a regular basis, but is useful if your instance has been running for a long time before pruning.

## Prune orphaned activities from the database

This will prune activities which are no longer referenced by anything.
Such activities might be the result of running `prune_objects` without `--prune-orphaned-activities`.
The same notes and warnings apply as for `prune_objects`.

The task will print out how many rows were freed in total in its last
line of output in the form `Deleted 345 rows`.
When running the job in limited batches this can be used to determine
when all orphaned activities have been deleted.

=== "OTP"

    ```sh
    ./bin/pleroma_ctl database prune_orphaned_activities [option ...]
    ```

=== "From Source"

    ```sh
    mix pleroma.database prune_orphaned_activities [option ...]
    ```

### Options

- `--limit n` - Only delete up to `n` activities in each query making up this job, i.e. if this job runs two queries at most `2n` activities will be deleted. Running this task repeatedly in limited batches can help maintain the instance's responsiveness while still freeing up some space.
- `--no-singles` - Do not delete activities referencing single objects
- `--no-arrays` - Do not delete activities referencing an array of objects

## Create a conversation for all existing DMs

Can be safely re-run

View file

@ -37,7 +37,8 @@ If any of the options are left unspecified, you will be prompted interactively.
- `--static-dir <path>` - the directory custom public files should be read from (custom emojis, frontend bundle overrides, robots.txt, etc.) - `--static-dir <path>` - the directory custom public files should be read from (custom emojis, frontend bundle overrides, robots.txt, etc.)
- `--listen-ip <ip>` - the ip the app should listen to, defaults to 127.0.0.1 - `--listen-ip <ip>` - the ip the app should listen to, defaults to 127.0.0.1
- `--listen-port <port>` - the port the app should listen to, defaults to 4000 - `--listen-port <port>` - the port the app should listen to, defaults to 4000
- `--strip-uploads <Y|N>` - use ExifTool to strip uploads of sensitive location data - `--strip-uploads-metadata <Y|N>` - use ExifTool to strip uploads of metadata when possible
- `--read-uploads-description <Y|N>` - use ExifTool to read image descriptions from uploads
- `--anonymize-uploads <Y|N>` - randomize uploaded filenames - `--anonymize-uploads <Y|N>` - randomize uploaded filenames
- `--dedupe-uploads <Y|N>` - store files based on their hash to reduce data storage requirements if duplicates are uploaded with different filenames - `--dedupe-uploads <Y|N>` - store files based on their hash to reduce data storage requirements if duplicates are uploaded with different filenames
- `--skip-release-env` - skip generation the release environment file - `--skip-release-env` - skip generation the release environment file


@@ -17,5 +17,5 @@ If you want to generate a restrictive `robots.txt`, you can run the following mi
=== "From Source"

    ```sh
-    mix pleroma.robotstxt disallow_all
+    mix pleroma.robots_txt disallow_all
    ```


@@ -0,0 +1,56 @@
# Security-related tasks
{! administration/CLI_tasks/general_cli_task_info.include !}
!!! danger
Many of these tasks were written in response to a patched exploit.
It is recommended to run them very soon after installing the respective security update.
Over time, as the database migrates, they might become less accurate or be removed altogether.
If you never ran an affected version, there's no point in running them.
## Spoofed ActivityPub objects exploit (2024-03, fixed in 3.11.1)
### Search for uploaded spoofing payloads
Scans local uploads for spoofing payloads.
If the instance is not using the local uploader it was not affected.
Attachments will be scanned anyway in case the local uploader was used in the past.
!!! note
This cannot reliably detect payloads attached to deleted posts.
=== "OTP"
```sh
./bin/pleroma_ctl security spoof-uploaded
```
=== "From Source"
```sh
mix pleroma.security spoof-uploaded
```
### Search for counterfeit posts in database
Scans all notes in the database for signs of being spoofed.
!!! note
Spoofs targeting local accounts can be detected rather reliably
(with some restrictions documented in the task's logs).
Counterfeit posts from remote users cannot. A best-effort attempt is made, but
a thorough attacker can avoid this and it may yield a small number of false positives.
Should you find counterfeit posts of local users, let other admins know so they can delete them too.
=== "OTP"
```sh
./bin/pleroma_ctl security spoof-inserted
```
=== "From Source"
```sh
mix pleroma.security spoof-inserted
```


@@ -4,12 +4,12 @@
1. Stop the Akkoma service.
2. Go to the working directory of Akkoma (default is `/opt/akkoma`)
-3. Run[¹] `sudo -Hu postgres pg_dump -d akkoma --format=custom -f </path/to/backup_location/akkoma.pgdump>` (make sure the postgres user has write access to the destination file)
-4. Copy `akkoma.pgdump`, `config/prod.secret.exs`[²], `config/setup_db.psql` (if still available) and the `uploads` folder to your backup destination. If you have other modifications, copy those changes too.
+3. Run `sudo -Hu postgres pg_dump -d akkoma --format=custom -f </path/to/backup_location/akkoma.pgdump>`[¹] (make sure the postgres user has write access to the destination file)
+4. Copy `akkoma.pgdump`, `config/config.exs`[²], the `uploads` folder, and the [static directory](../configuration/static_dir.md) to your backup destination. If you have other modifications, copy those changes too.
5. Restart the Akkoma service.

-[¹]: We assume the database name is "akkoma". If not, you can find the correct name in your config files.
-[²]: If you've installed using OTP, you need `config/config.exs` instead of `config/prod.secret.exs`.
+[¹]: We assume the database name is "akkoma". If not, you can find the correct name in your configuration files.
+[²]: If you have a from source installation, you need `config/prod.secret.exs` instead of `config/config.exs`. The `config/config.exs` file also exists, but in case of from source installations it only contains the default values and is tracked by Git, so you don't need to back it up.

## Restore/Move
@@ -17,19 +17,16 @@
2. Stop the Akkoma service.
3. Go to the working directory of Akkoma (default is `/opt/akkoma`)
4. Copy the above mentioned files back to their original position.
-5. Drop the existing database and user if restoring in-place[¹]. `sudo -Hu postgres psql -c 'DROP DATABASE akkoma;';` `sudo -Hu postgres psql -c 'DROP USER akkoma;'`
-6. Restore the database schema and akkoma role using either of the following options
-  * You can use the original `setup_db.psql` if you have it[²]: `sudo -Hu postgres psql -f config/setup_db.psql`.
-  * Or recreate the database and user yourself (replace the password with the one you find in the config file) `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-wich-you-can-find-in-your-config-file>'; CREATE DATABASE akkoma OWNER akkoma;"`.
+5. Drop the existing database and user[¹]: `sudo -Hu postgres psql -c 'DROP DATABASE akkoma;';` `sudo -Hu postgres psql -c 'DROP USER akkoma;'`
+6. Restore the database schema and akkoma role[¹] (replace the password with the one you find in the configuration file): `sudo -Hu postgres psql -c "CREATE USER akkoma WITH ENCRYPTED PASSWORD '<database-password-which-you-can-find-in-your-configuration-file>';"` `sudo -Hu postgres psql -c "CREATE DATABASE akkoma OWNER akkoma;"`.
7. Now restore the Akkoma instance's data into the empty database schema[¹]: `sudo -Hu postgres pg_restore -d akkoma -v -1 </path/to/backup_location/akkoma.pgdump>`
-8. If you installed a newer Akkoma version, you should run `MIX_ENV=prod mix ecto.migrate`[³]. This task performs database migrations, if there were any.
+8. If you installed a newer Akkoma version, you should run the database migrations `./bin/pleroma_ctl migrate`[²].
9. Restart the Akkoma service.
10. Run `sudo -Hu postgres vacuumdb --all --analyze-in-stages`. This will quickly generate the statistics so that postgres can properly plan queries.
-11. If setting up on a new server configure Nginx by using the `installation/akkoma.nginx` config sample or reference the Akkoma installation guide for your OS which contains the Nginx configuration instructions.
+11. If setting up on a new server, configure Nginx by using the `installation/nginx/akkoma.nginx` configuration sample or reference the Akkoma installation guide which contains the Nginx configuration instructions.

-[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your config files.
-[²]: You can recreate the `config/setup_db.psql` by running the `mix pleroma.instance gen` task again. You can ignore most of the questions, but make the database user, name, and password the same as found in your backed up config file. This will also create a new `config/generated_config.exs` file which you may delete as it is not needed.
-[³]: Prefix with `MIX_ENV=prod` to run it using the production config file.
+[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your configuration files.
+[²]: If you have a from source installation, the command is `MIX_ENV=prod mix ecto.migrate`. Note that we prefix with `MIX_ENV=prod` to use the `config/prod.secret.exs` configuration file.

## Remove
@@ -45,3 +42,16 @@
8. Remove the dependencies that you don't need anymore (see installation guide). Make sure you don't remove packages that are still needed for other software that you have running!

[¹]: We assume the database name and user are both "akkoma". If not, you can find the correct name in your config files.

## Docker installations

If running behind Docker, it is required to run the above commands inside of a running database container.

### Example
Running `docker compose run --rm db pg_dump <...>` will fail and return:
```
pg_dump: error: connection to server on socket "/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
```
However, first starting just the database container with `docker compose up db -d`, and then running `docker compose exec db pg_dump -d akkoma --format=custom -f </your/backup/dir/akkoma.pgdump>` will successfully generate a database dump.
Then to make the file accessible on the host system you can run `docker compose cp db:</your/backup/dir/akkoma.pgdump> </your/target/location>` to copy it from the container.


@@ -3,7 +3,7 @@
If you run akkoma, you may be inclined to collect metrics to ensure your instance is running smoothly,
and that there's nothing quietly failing in the background.

-To facilitate this, akkoma exposes prometheus metrics to be scraped.
+To facilitate this, akkoma exposes a dashboard and prometheus metrics to be scraped.

## Prometheus
@@ -31,3 +31,15 @@ Once you have your token of the form `Bearer $ACCESS_TOKEN`, you can use that in
      - targets:
        - example.com
    ```

## Dashboard

Administrators can access a live dashboard under `/phoenix/live_dashboard`
giving an overview of uptime, software versions, database stats and more.

The dashboard also includes a variation of the prometheus metrics, however
they do not exactly match due to respective limitations of the dashboard
and the prometheus exporter.
More importantly, the dashboard collects metrics locally in the browser
only while the page is open, and cannot give a view of their past history.
For proper monitoring it is recommended to set up prometheus.


@@ -1,12 +1,15 @@
# Akkoma Clients

-Note: Additional clients may work, but these are known to work with Akkoma.
-Apps listed here might not support all of Akkoma's features.
+This is a list of clients that are known to work with Akkoma.
+
+!!! warning
+    **Clients listed here are not officially supported by the Akkoma project.**
+    Some Akkoma features may be unsupported by these clients.

## Multiplatform

### Kaiteki
- Homepage: <https://kaiteki.app/>
- Source Code: <https://github.com/Kaiteki-Fedi/Kaiteki>
-- Contact: [@kaiteki@fedi.software](https://fedi.software/@Kaiteki)
+- Contact: [@kaiteki@social.kaiteki.app](https://social.kaiteki.app/@kaiteki)
- Platforms: Web, Windows, Linux, Android
- Features: MastoAPI, Supports multiple backends

@@ -38,12 +41,6 @@ Apps listed here might not support all of Akkoma's features.
- Platforms: Android
- Features: MastoAPI, No Streaming, Emoji Reactions, Text Formatting, FE Stickers

-### Fedi
-- Homepage: <https://www.fediapp.com/>
-- Source Code: Proprietary, but gratis
-- Platforms: iOS, Android
-- Features: MastoAPI, Pleroma-specific features like Reactions
-
### Tusky
- Homepage: <https://tuskyapp.github.io/>
- Source Code: <https://github.com/tuskyapp/Tusky>
@@ -51,12 +48,18 @@ Apps listed here might not support all of Akkoma's features.
- Platforms: Android
- Features: MastoAPI, No Streaming

+### Subway Tooter
+- Source Code: <https://github.com/tateisu/SubwayTooter/>
+- Contact: [@SubwayTooter@mastodon.juggler.jp](https://mastodon.juggler.jp/@SubwayTooter)
+- Platforms: Android
+- Features: MastoAPI, Editing, Emoji Reactions (including custom emoji)
+
## Alternative Web Interfaces

-### Pinafore
-- Note: Pinafore is unmaintained (See [the author's original article](https://nolanlawson.com/2023/01/09/retiring-pinafore/) for details)
-- Homepage: <https://pinafore.social/>
-- Source Code: <https://github.com/nolanlawson/pinafore>
-- Contact: [@pinafore@mastodon.technology](https://mastodon.technology/users/pinafore)
+### Enafore
+- An actively developed fork of Pinafore with improved Akkoma support
+- Homepage: <https://enafore.social/>
+- Source Code: <https://github.com/enafore/enafore>
+- Contact: [@enfore@enafore.social](https://meta.enafore.social/@enafore)
- Features: MastoAPI, No Streaming

### Sengi


@@ -63,6 +63,8 @@ To add configuration to your config file, you can copy it from the base config.
* `local_bubble`: Array of domains representing instances closely related to yours. Used to populate the `bubble` timeline. e.g `["example.com"]`, (default: `[]`)
* `languages`: List of Language Codes used by the instance. This is used to try and set a default language from the frontend. It will try and find the first match between the languages set here and the user's browser languages. It will default to the first language in this setting if there is no match. (default `["en"]`)
* `export_prometheus_metrics`: Enable prometheus metrics, served at `/api/v1/akkoma/metrics`, requiring the `admin:metrics` oauth scope.
* `privileged_staff`: Set to `true` to give moderators access to a few higher responsibility actions.
* `federated_timeline_available`: Set to `false` to remove access to the federated timeline for all users.

## :database
* `improved_hashtag_timeline`: Setting to force toggle / force disable improved hashtags timeline. `:enabled` forces hashtags to be fetched from `hashtags` table for hashtags timeline. `:disabled` forces object-embedded hashtags to be used (slower). Keep it `:auto` for automatic behaviour (it is auto-set to `:enabled` [unless overridden] when HashtagsTableMigrator completes).
@@ -104,31 +106,60 @@ To add configuration to your config file, you can copy it from the base config.
## Message rewrite facility

### :mrf
-* `policies`: Message Rewrite Policy, either one or a list. Here are the ones available by default:
-  * `Pleroma.Web.ActivityPub.MRF.NoOpPolicy`: Doesnt modify activities (default).
-  * `Pleroma.Web.ActivityPub.MRF.DropPolicy`: Drops all activities. It generally doesnt makes sense to use in production.
-  * `Pleroma.Web.ActivityPub.MRF.SimplePolicy`: Restrict the visibility of activities from certains instances (See [`:mrf_simple`](#mrf_simple)).
-  * `Pleroma.Web.ActivityPub.MRF.TagPolicy`: Applies policies to individual users based on tags, which can be set using pleroma-fe/admin-fe/any other app that supports Pleroma Admin API. For example it allows marking posts from individual users nsfw (sensitive).
-  * `Pleroma.Web.ActivityPub.MRF.SubchainPolicy`: Selectively runs other MRF policies when messages match (See [`:mrf_subchain`](#mrf_subchain)).
-  * `Pleroma.Web.ActivityPub.MRF.RejectNonPublic`: Drops posts with non-public visibility settings (See [`:mrf_rejectnonpublic`](#mrf_rejectnonpublic)).
-  * `Pleroma.Web.ActivityPub.MRF.EnsureRePrepended`: Rewrites posts to ensure that replies to posts with subjects do not have an identical subject and instead begin with re:.
-  * `Pleroma.Web.ActivityPub.MRF.AntiLinkSpamPolicy`: Rejects posts from likely spambots by rejecting posts from new users that contain links.
-  * `Pleroma.Web.ActivityPub.MRF.MediaProxyWarmingPolicy`: Crawls attachments using their MediaProxy URLs so that the MediaProxy cache is primed.
-  * `Pleroma.Web.ActivityPub.MRF.MentionPolicy`: Drops posts mentioning configurable users. (See [`:mrf_mention`](#mrf_mention)).
-  * `Pleroma.Web.ActivityPub.MRF.VocabularyPolicy`: Restricts activities to a configured set of vocabulary. (See [`:mrf_vocabulary`](#mrf_vocabulary)).
-  * `Pleroma.Web.ActivityPub.MRF.ObjectAgePolicy`: Rejects or delists posts based on their age when received. (See [`:mrf_object_age`](#mrf_object_age)).
-  * `Pleroma.Web.ActivityPub.MRF.ActivityExpirationPolicy`: Sets a default expiration on all posts made by users of the local instance. Requires `Pleroma.Workers.PurgeExpiredActivity` to be enabled for processing the scheduled delections.
-  * `Pleroma.Web.ActivityPub.MRF.ForceBotUnlistedPolicy`: Makes all bot posts to disappear from public timelines.
-  * `Pleroma.Web.ActivityPub.MRF.FollowBotPolicy`: Automatically follows newly discovered users from the specified bot account. Local accounts, locked accounts, and users with "#nobot" in their bio are respected and excluded from being followed.
-  * `Pleroma.Web.ActivityPub.MRF.AntiFollowbotPolicy`: Drops follow requests from followbots. Users can still allow bots to follow them by first following the bot.
-  * `Pleroma.Web.ActivityPub.MRF.KeywordPolicy`: Rejects or removes from the federated timeline or replaces keywords. (See [`:mrf_keyword`](#mrf_keyword)).
-  * `Pleroma.Web.ActivityPub.MRF.NormalizeMarkup`: Pass inbound HTML through a scrubber to make sure it doesn't have anything unusual in it. On by default, cannot be turned off.
-  * `Pleroma.Web.ActivityPub.MRF.InlineQuotePolicy`: Append a link to a post that quotes another post with the link to the quoted post, to ensure that software that does not understand quotes can have full context. On by default, cannot be turned off.
* `transparency`: Make the content of your Message Rewrite Facility settings public (via nodeinfo).
* `transparency_exclusions`: Exclude specific instance names from MRF transparency. The use of the exclusions feature will be disclosed in nodeinfo as a boolean value.
* `transparency_obfuscate_domains`: Show domains with `*` in the middle, to censor them if needed. For example, `ridingho.me` will show as `rid*****.me`
+* `policies`: Message Rewrite Policy, either one or a list. Here are the ones available by default:
+  * `Pleroma.Web.ActivityPub.MRF.NoOpPolicy`: Doesn't modify activities (default).
+  * `Pleroma.Web.ActivityPub.MRF.DropPolicy`: Drops all activities. It generally doesn't make sense to use in production.
+  * `Pleroma.Web.ActivityPub.MRF.ActivityExpirationPolicy`: Sets a default expiration on all posts made by users of the local instance. Requires `Pleroma.Workers.PurgeExpiredActivity` to be enabled for processing the scheduled deletions.
+    (See [`:mrf_activity_expiration`](#mrf_activity_expiration))
+  * `Pleroma.Web.ActivityPub.MRF.AntiFollowbotPolicy`: Drops follow requests from followbots. Users can still allow bots to follow them by first following the bot.
+  * `Pleroma.Web.ActivityPub.MRF.AntiLinkSpamPolicy`: Rejects posts from likely spambots by rejecting posts from new users that contain links.
+  * `Pleroma.Web.ActivityPub.MRF.EnsureRePrepended`: Rewrites posts to ensure that replies to posts with subjects do not have an identical subject and instead begin with re:.
+  * `Pleroma.Web.ActivityPub.MRF.ForceBotUnlistedPolicy`: Makes all bot posts disappear from public timelines.
+  * `Pleroma.Web.ActivityPub.MRF.HellthreadPolicy`: Blocks messages with too many mentions.
+    (See [`mrf_hellthread`](#mrf_hellthread))
+  * `Pleroma.Web.ActivityPub.MRF.KeywordPolicy`: Rejects or removes from the federated timeline or replaces keywords. (See [`:mrf_keyword`](#mrf_keyword)).
+  * `Pleroma.Web.ActivityPub.MRF.MediaProxyWarmingPolicy`: Crawls attachments using their MediaProxy URLs so that the MediaProxy cache is primed.
+  * `Pleroma.Web.ActivityPub.MRF.MentionPolicy`: Drops posts mentioning configurable users. (See [`:mrf_mention`](#mrf_mention)).
+  * `Pleroma.Web.ActivityPub.MRF.NoEmptyPolicy`: Drops local activities which have no actual content
+    (e.g. no attachments and only consists of mentions).
+  * `Pleroma.Web.ActivityPub.MRF.NoPlaceholderTextPolicy`: Strips content placeholders from posts
+    (such as the dot from mastodon).
+  * `Pleroma.Web.ActivityPub.MRF.ObjectAgePolicy`: Rejects or delists posts based on their age when received. (See [`:mrf_object_age`](#mrf_object_age)).
+  * `Pleroma.Web.ActivityPub.MRF.RejectNewlyCreatedAccountNotesPolicy`: For a while, rejects posts from users the server only recently learned about. Great to block spam accounts. (See [`:mrf_reject_newly_created_account_notes`](#mrf_reject_newly_created_account_notes))
+  * `Pleroma.Web.ActivityPub.MRF.RejectNonPublic`: Drops posts with non-public visibility settings (See [`:mrf_rejectnonpublic`](#mrf_rejectnonpublic)).
+  * `Pleroma.Web.ActivityPub.MRF.SimplePolicy`: Restrict the visibility of activities from certain instances (See [`:mrf_simple`](#mrf_simple)).
+  * `Pleroma.Web.ActivityPub.MRF.StealEmojiPolicy`: Steals all eligible emoji encountered in posts from remote instances
+    (See [`:mrf_steal_emoji`](#mrf_steal_emoji))
+  * `Pleroma.Web.ActivityPub.MRF.SubchainPolicy`: Selectively runs other MRF policies when messages match (See [`:mrf_subchain`](#mrf_subchain)).
+  * `Pleroma.Web.ActivityPub.MRF.TagPolicy`: Applies policies to individual users based on tags, which can be set using pleroma-fe/admin-fe/any other app that supports Pleroma Admin API. For example it allows marking posts from individual users nsfw (sensitive).
+  * `Pleroma.Web.ActivityPub.MRF.UserAllowListPolicy`: Drops all posts except from users specified in a list.
+    (See [`:mrf_user_allowlist`](#mrf_user_allowlist))
+  * `Pleroma.Web.ActivityPub.MRF.VocabularyPolicy`: Restricts activities to a configured set of vocabulary. (See [`:mrf_vocabulary`](#mrf_vocabulary)).
+
+Additionally the following MRFs will *always* be applied and cannot be disabled:
+* `Pleroma.Web.ActivityPub.MRF.DirectMessageDisabledPolicy`: Strips users who limit who can send them DMs from the recipients of non-eligible DMs
+* `Pleroma.Web.ActivityPub.MRF.HashtagPolicy`: Depending on a post's hashtags it can be rejected, get its sensitive flag force-enabled, or be removed from the global timeline
+  (See [`:mrf_hashtag`](#mrf_hashtag))
+* `Pleroma.Web.ActivityPub.MRF.InlineQuotePolicy`: Append a link to a post that quotes another post with the link to the quoted post, to ensure that software that does not understand quotes can have full context.
+  (See [`:mrf_inline_quote`](#mrf_inline_quote))
+* `Pleroma.Web.ActivityPub.MRF.NormalizeMarkup`: Pass inbound HTML through a scrubber to make sure it doesn't have anything unusual in it.
+  (See [`:mrf_normalize_markup`](#mrf_normalize_markup))
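Put together, a minimal `:mrf` block using the options above might look like this sketch (the policy choice and domain are placeholders):

```elixir
config :pleroma, :mrf,
  policies: [Pleroma.Web.ActivityPub.MRF.SimplePolicy],
  transparency: true,
  transparency_obfuscate_domains: ["ridingho.me"]
```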
## Federation
### :activitypub
* `unfollow_blocked`: Whether blocks result in people getting unfollowed
* `outgoing_blocks`: Whether to federate blocks to other instances
* `blockers_visible`: Whether a user can see the posts of users who blocked them
* `deny_follow_blocked`: Whether to disallow following an account that has blocked the user in question
* `sign_object_fetches`: Sign object fetches with HTTP signatures
* `authorized_fetch_mode`: Require HTTP signatures for AP fetches
* `max_collection_objects`: The maximum number of objects to fetch from a remote AP collection.
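As a sketch, a hardened setup using the options above could look like this (the values are illustrative, not recommendations):

```elixir
config :pleroma, :activitypub,
  # sign our own fetches and require signatures on inbound fetches
  sign_object_fetches: true,
  authorized_fetch_mode: true,
  max_collection_objects: 50
```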
### MRF policies

!!! note
@@ -144,6 +175,7 @@ To add configuration to your config file, you can copy it from the base config.
* `report_removal`: List of instances to reject reports from and the reason for doing so.
* `avatar_removal`: List of instances to strip avatars from and the reason for doing so.
* `banner_removal`: List of instances to strip banners from and the reason for doing so.
* `background_removal`: List of instances to strip user backgrounds from and the reason for doing so.
* `reject_deletes`: List of instances to reject deletions from and the reason for doing so.
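For instance, stripping backgrounds and banners from a host might look like this sketch (the domain and reasons are placeholders, assuming the `{host, reason}` tuple format used by `:mrf_simple` fields):

```elixir
config :pleroma, :mrf_simple,
  background_removal: [{"example.com", "inappropriate backgrounds"}],
  banner_removal: [{"example.com", "inappropriate banners"}]
```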
#### :mrf_subchain
@@ -206,7 +238,9 @@ config :pleroma, :mrf_user_allowlist, %{
#### :mrf_steal_emoji
* `hosts`: List of hosts to steal emojis from
* `rejected_shortcodes`: Regex-list of shortcodes to reject
-* `size_limit`: File size limit (in bytes), checked before an emoji is saved to the disk
+* `size_limit`: File size limit (in bytes), checked before download if possible (and the remote server is honest),
+  otherwise (or again) checked before saving the emoji to disk
+* `download_unknown_size`: whether to download an emoji when the remote server doesn't report its size in advance
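A sketch combining these options (the host, regex, and size limit are placeholders):

```elixir
config :pleroma, :mrf_steal_emoji,
  hosts: ["emoji.example.com"],
  # reject shortcodes that are just long digit runs
  rejected_shortcodes: [~r/\d{4,}/],
  size_limit: 50_000,
  download_unknown_size: false
```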
#### :mrf_activity_expiration
@@ -222,14 +256,24 @@ Notes:
- The hashtags in the configuration do not have a leading `#`.
- This MRF Policy is always enabled; if you want to disable it you have to set empty lists.

#### :mrf_reject_newly_created_account_notes
After initially encountering a user, all their posts
will be rejected for the configured time (in seconds).
Only drops posts. Follows, reposts, etc. are not affected.

* `age`: Time below which to reject (in seconds)

An example (86400 seconds = 24 hours):

```elixir
config :pleroma, :mrf_reject_newly_created_account_notes, age: 86400
```
#### :mrf_inline_quote
* `prefix`: what prefix to prepend to quoted URLs
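For example (a sketch; `RT:` is just an illustrative prefix):

```elixir
config :pleroma, :mrf_inline_quote, prefix: "RT:"
```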
#### :mrf_normalize_markup
* `scrub_policy`: the scrubbing module to use (by default a built-in HTML sanitiser)
## Pleroma.User

## :media_proxy
* `enabled`: Enables proxying of remote media to the instance's proxy
* `base_url`: The base URL to access a user-uploaded file.
  Using a (sub)domain distinct from the instance endpoint is **strongly** recommended.
* `proxy_opts`: All options defined in `Pleroma.ReverseProxy` documentation, defaults to `[max_body_length: (25*1_048_576)]`.
* `whitelist`: List of hosts with scheme to bypass the mediaproxy (e.g. `https://example.com`)
* `invalidation`: options for removing media from the cache after the object is deleted:
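A minimal sketch of a media proxy configuration assuming a dedicated media subdomain; `media.example.tld` is a placeholder and `max_body_length` repeats the documented default:

```elixir
config :pleroma, :media_proxy,
  enabled: true,
  base_url: "https://media.example.tld",
  proxy_opts: [max_body_length: 25 * 1_048_576]
```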
#### Pleroma.Web.MediaProxy.Invalidation.Script

This strategy allows performing an external shell script to purge the cache.
URLs of attachments are passed to the script as arguments.
* `uploader`: Which one of the [uploaders](#uploaders) to use.
* `filters`: List of [upload filters](#upload-filters) to use.
* `link_name`: When enabled Akkoma will add a `name` parameter to the url of the upload, for example `https://instance.tld/media/corndog.png?name=corndog.png`. This is needed to provide the correct filename in Content-Disposition headers.
* `base_url`: The base URL to access a user-uploaded file; MUST be configured explicitly.
  Using a (sub)domain distinct from the instance endpoint is **strongly** recommended. A good value might be `https://media.myakkoma.instance/media/`.
* `proxy_opts`: Proxy options, see `Pleroma.ReverseProxy` documentation.
* `filename_display_max_length`: Set max length of a filename to display. 0 = no limit. Default: 30.
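Putting these options together, a sketch of an upload configuration using the local uploader; the domain and filter list are illustrative:

```elixir
config :pleroma, Pleroma.Upload,
  uploader: Pleroma.Uploaders.Local,
  # MUST be set; keep it on a dedicated (sub)domain
  base_url: "https://media.myakkoma.instance/media/",
  filters: [Pleroma.Upload.Filter.AnonymizeFilename],
  link_name: true,
  filename_display_max_length: 30
```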
### Upload filters
#### Pleroma.Upload.Filter.Dedupe
**Always** active; cannot be turned off.
Renames files to their hash and prevents duplicate files filling up the disk.
No specific configuration.
#### Pleroma.Upload.Filter.AnonymizeFilename

This filter replaces the declared filename (not the path) of an upload.

* `text`: Text to replace filenames in links. If empty, `{random}.extension` will be used. You can get the original filename extension by using `{extension}`, for example `custom-file-name.{extension}`.
#### Pleroma.Upload.Filter.Exiftool.StripMetadata

This filter strips metadata with Exiftool, leaving color profiles and orientation intact.

* `purge`: List of Exiftool tag names or tag group names to purge
* `preserve`: List of Exiftool tag names or tag group names to preserve even if they occur in the purge list
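As a hedged example, purging everything while keeping orientation and color data; the tag group names are assumptions, not verified defaults:

```elixir
config :pleroma, Pleroma.Upload.Filter.Exiftool.StripMetadata,
  # "all" purges every tag Exiftool knows about ...
  purge: ["all"],
  # ... except the groups listed here
  preserve: ["ColorSpaceTags", "Orientation"]
```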
#### Pleroma.Upload.Filter.Exiftool.ReadDescription
This filter reads the ImageDescription and iptc:Caption-Abstract fields with Exiftool so clients can prefill the media description field.
No specific configuration.
]
```

You may also need to set up your frontend to use OAuth logins. For example, for `akkoma-fe`:
```elixir
config :pleroma, :frontend_configurations,
pleroma_fe: %{
loginMethod: "token"
}
```
## Link parsing

### :uri_schemes

Boolean, enables/disables in-database configuration. Read [Transferring the config to/from the database](../administration/CLI_tasks/config.md) for more information.
## :database_config_whitelist
List of valid configuration sections which are allowed to be configured from the
database. Settings stored in the database before the whitelist is configured are
still applied, so it is suggested to only use the whitelist on instances that
have not migrated the config to the database.
Example:
```elixir
config :pleroma, :database_config_whitelist, [
{:pleroma, :instance},
{:pleroma, Pleroma.Web.Metadata},
{:auto_linker}
]
```
### Multi-factor authentication - :two_factor_authentication

* `totp` - a list containing TOTP configuration
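As an illustration only (the `digits`/`period` keys are assumptions based on common TOTP parameters, not confirmed defaults):

```elixir
config :pleroma, :two_factor_authentication,
  totp: [
    # length of the generated code and its validity window in seconds
    digits: 6,
    period: 30
  ]
```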
### `:argos_translate`

- `:command_argos_translate` - command for `argos-translate`. Can be the command if it's in your PATH, or the full path to the file (default: `argos-translate`).
- `:command_argospm` - command for `argospm`. Can be the command if it's in your PATH, or the full path to the file (default: `argospm`).
- `:strip_html` - Strip html from the post before translating it (default: `true`).
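Using the documented defaults, a full sketch (paths may differ on your system):

```elixir
config :pleroma, :argos_translate,
  command_argos_translate: "argos-translate",
  command_argospm: "argospm",
  strip_html: true
```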
This sets the `secure` flag on Akkoma's session cookie. This makes sure that the cookie is only accepted over encrypted HTTPS connections. This implicitly renames the cookie from `pleroma_key` to `__Host-pleroma-key` which enforces some restrictions. (see [cookie prefixes](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie#Cookie_prefixes))
### `Pleroma.Upload, :uploader, :base_url`

> Recommended value: *anything on a different domain than the instance endpoint; e.g. https://media.myinstance.net/*

Uploads are user-controlled and (unless you're running a true single-user
instance) should therefore not be considered trusted. But the domain is used
as a privilege boundary e.g. by HTTP content security policy and ActivityPub.
Having uploads on the same domain has enabled several past vulnerabilities
that malicious users were able to exploit.
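A sketch of the corresponding setting, reusing the placeholder domain from the recommended value above:

```elixir
config :pleroma, Pleroma.Upload,
  # any domain distinct from the instance endpoint
  base_url: "https://media.myinstance.net/"
```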
### `:http_security`

> Recommended value: `true`
## Activate it
* Set up a subdomain for the proxy with its nginx config on the same machine
* Edit the nginx config for the upload/MediaProxy subdomain to point to the subdomain that has been set up

If you came here from one of the installation guides, take a look at the example configuration `/installation/nginx/akkoma.nginx`, where this part is already included.

* Append the following to your `prod.secret.exs` or `dev.secret.exs` (depends on which mode your instance is running):
```elixir
# Replace media.example.tld with the subdomain you set up earlier
config :pleroma, :media_proxy,
  enabled: true,
  proxy_opts: [
    redirect_on_failure: true
  ],
  base_url: "https://media.example.tld"
```
You **really** should use a subdomain to serve proxied files; while we will fix bugs resulting from this, serving arbitrary remote content on your main domain namespace is a significant attack surface.

* Restart nginx and Akkoma
### Set as default theme

Now we can set the new theme as default in the [Pleroma FE configuration](https://docs-fe.akkoma.dev/stable/CONFIGURATION/).

Example of adding the new theme in the back-end config files
```elixir
```elixir
config :pleroma, :http_security,
  enabled: false
```

In the Nginx config, add the following into the `location /` block:
```nginx
  add_header X-XSS-Protection "0";
  add_header X-Permitted-Cross-Domain-Policies none;
  add_header X-Frame-Options DENY;
  add_header X-Content-Type-Options nosniff;
  add_header Referrer-Policy same-origin;
```

Change the `listen` directive to the following:
```nginx
listen 127.0.0.1:14447;
```

Set `server_name` to your i2p address.

Reload Nginx:
```
systemctl restart i2pd.service --no-block
systemctl reload nginx.service
```

*Notice:* The stop command initiates a graceful shutdown process, i2pd stops after finishing routing transit tunnels (maximum 10 minutes).
* `media_removal`: Servers in this group will have media stripped from incoming messages.
* `avatar_removal`: Avatars from these servers will be stripped from incoming messages.
* `banner_removal`: Banner images from these servers will be stripped from incoming messages.
* `background_removal`: User background images from these servers will be stripped from incoming messages.
* `report_removal`: Servers in this group will have their reports (flags) rejected.
* `federated_timeline_removal`: Servers in this group will have their messages unlisted from the public timelines by flipping the `to` and `cc` fields.
* `reject_deletes`: Deletion requests will be rejected from these servers.

The effects of MRF policies can be very drastic. It is important to use this functionality carefully. Always try to talk to an admin before writing an MRF policy concerning their instance.
## Hiding or Obfuscating Policies
You can opt out of publicly displaying all MRF policies or only hide or obfuscate selected domains.
To just hide everything set:
```elixir
config :pleroma, :mrf,
...
transparency: false,
```
To hide or obfuscate only select entries, use:
```elixir
config :pleroma, :mrf,
...
transparency_obfuscate_domains: ["handholdi.ng", "badword.com"],
transparency_exclusions: [{"ghost.club", "even a fragment is too spoopy for humans"}]
```
## More MRF Policies
See the [documentation cheatsheet](cheatsheet.md)
for all available MRF policies and their options.
## Writing your own MRF Policy

As discussed above, the MRF system is a modular system that supports pluggable policies. This means that an admin may write a custom MRF policy in Elixir or any other language that runs on the Erlang VM, by specifying the module name in the `policies` config setting.
```elixir
config :pleroma, :http_security,
  enabled: false
```

In the Nginx config, add the following into the `location /` block:
```nginx
  add_header X-XSS-Protection "0";
  add_header X-Permitted-Cross-Domain-Policies none;
  add_header X-Frame-Options DENY;
  add_header X-Content-Type-Options nosniff;
  add_header Referrer-Policy same-origin;
```

Change the `listen` directive to the following:
```nginx
listen 127.0.0.1:8099;
```

Set the `server_name` to your onion address.

Reload Nginx:
```
systemctl reload nginx
```
`ExecStart=/usr/bin/elixir --erl '-args_file /opt/akkoma/config/vm.args' -S /usr/bin/mix phx.server`

If using an OTP release, set the `RELEASE_VM_ARGS` environment variable to the path to the vm.args file.

Check your OS documentation to adopt a similar strategy on other platforms.

### Virtual Machine and/or few CPU cores

Disable the busy-waiting. This should generally be done if you're on a platform that does burst scheduling, like AWS, or if you're running other services on the same machine.

**vm.args:**
+sbwtdio none
```

These settings are enabled by default for OTP releases.

### Dedicated Hardware

Enable more busy waiting, increase the internal maximum limit of BEAM processes and ports. You can use this if you run on dedicated hardware, but it is not necessary.
## PGTune

[PgTune](https://pgtune.leopard.in.ua) can be used to get recommended settings. Make sure to set the DB type to "Online transaction processing system" for optimal performance. Also set the number of connections to between 25 and 30. This will allow each connection to have access to more resources while still leaving some room for running maintenance tasks while the instance is still running. It is also recommended not to use the "Network Storage" option.

If your server runs other services, you may want to take that into account. E.g. if you have 4G RAM, but 1G of it is already used for other services, it may be better to tell PGTune you only have 3G. In the end, PGTune only provides recommended settings; you can always try to fine-tune further.
## Disable generic query plans
When PostgreSQL receives a query, it decides on a strategy for searching the requested data; this is called a query plan. The query planner has two modes: generic and custom. Generic makes a plan for all queries of the same shape, ignoring the parameters, which is then cached and reused. Custom, on the contrary, generates a unique query plan based on query parameters.

By default PostgreSQL has an algorithm to decide which mode is more efficient for a particular query; however, this algorithm has been observed to be wrong on some of the queries Akkoma sends, leading to serious performance loss. Therefore, it is recommended to disable generic mode.

Akkoma already avoids generic query plans by default; however, the method it uses is not the most efficient because it needs to be compatible with all supported PostgreSQL versions. For PostgreSQL 12 and higher, additional performance can be gained by adding the following to Akkoma configuration:
```elixir
config :pleroma, Pleroma.Repo,
prepare: :named,
parameters: [
plan_cache_mode: "force_custom_plan"
]
```
A more detailed explanation of the issue can be found at <https://blog.soykaf.com/post/postgresql-elixir-troubles/>.
> config :pleroma, Pleroma.Search.Meilisearch,
>    url: "http://127.0.0.1:7700/",
>    private_key: "private key",
>    search_key: "search key",
>    initial_indexing_chunk_size: 100_000
Information about setting up meilisearch can be found in the Information about setting up meilisearch can be found in the
### Private key authentication (optional)

To set the private key, use the `MEILI_MASTER_KEY` environment variable when starting. After setting the _master key_, you have to get the _private key_ and possibly the _search key_, which are actually used for authentication.
=== "OTP" === "OTP"
```sh ```sh
@ -57,7 +58,11 @@ you have to get the _private key_, which is actually used for authentication.
mix pleroma.search.meilisearch show-keys <your master key here> mix pleroma.search.meilisearch show-keys <your master key here>
``` ```
You will see a "Default Admin API Key"; this is the key you actually put into
your configuration file as `private_key`. You should also see a
"Default Search API key"; put this into your config as `search_key`.
If your version of Meilisearch only showed the former,
just leave `search_key` completely unset in Akkoma's config.
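Putting both keys into the configuration might then look like this; the key values are placeholders for the strings printed by `show-keys`:

```elixir
config :pleroma, Pleroma.Search.Meilisearch,
  url: "http://127.0.0.1:7700/",
  private_key: "<Default Admin API Key>",
  search_key: "<Default Search API key>"
```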
### Initial indexing
## Nginx

The following are excerpts from the [suggested nginx config](https://akkoma.dev/AkkomaGang/akkoma/src/branch/develop/installation/nginx/akkoma.nginx) that demonstrate the necessary config for the media proxy to work.

A `proxy_cache_path` must be defined, for example:
- `:pools`
- partially settings inside these keys:
  - `:seconds_valid` in `Pleroma.Captcha`
  - `:upload_limit` in `:instance`
- Params:
List of settings which support only full update by subkey:

{"tuple": [":uploader", "Pleroma.Uploaders.Local"]},
{"tuple": [":filters", ["Pleroma.Upload.Filter.Dedupe"]]},
{"tuple": [":link_name", true]},
{"tuple": [":proxy_opts", [
  {"tuple": [":redirect_on_failure", false]},
  {"tuple": [":max_body_length", 1048576]},
# Akkoma API
Request authentication (if required) and parameters work the same as for [Pleroma API](pleroma_api.md).
## `/api/v1/akkoma/preferred_frontend/available`
### Returns the available frontends which can be picked as the preferred choice
* Method: `GET`
* Authentication: not required
* Params: none
* Response: JSON
* Example response:
```json
["pleroma-fe/stable"]
```
!!! note
There's also a browser UI under `/akkoma/frontend`
for interactively querying and changing this.
## `/api/v1/akkoma/preferred_frontend`
### Configures the preferred frontend of this session
* Method: `PUT`
* Authentication: not required
* Params:
* `frontend_name`: STRING containing one of the available frontends
* Response: JSON
* Example response:
```json
{"frontend_name":"pleroma-fe/stable"}
```
!!! note
There's also a browser UI under `/akkoma/frontend`
for interactively querying and changing this.
## `/api/v1/akkoma/metrics`
### Provides metrics for Prometheus to scrape
* Method: `GET`
* Authentication: required (admin:metrics)
* Params: none
* Response: text
* Example response:
```
# HELP pleroma_remote_users_total
# TYPE pleroma_remote_users_total gauge
pleroma_remote_users_total 25
# HELP pleroma_local_statuses_total
# TYPE pleroma_local_statuses_total gauge
pleroma_local_statuses_total 17
# HELP pleroma_domains_total
# TYPE pleroma_domains_total gauge
pleroma_domains_total 4
# HELP pleroma_local_users_total
# TYPE pleroma_local_users_total gauge
pleroma_local_users_total 3
...
```
## `/api/v1/akkoma/translation/languages`
### Returns available source and target languages for automated text translation
* Method: `GET`
* Authentication: required
* Params: none
* Response: JSON
* Example response:
```json
{
"source": [
{"code":"LV", "name":"Latvian"},
{"code":"ZH", "name":"Chinese (traditional)"},
{"code":"EN-US", "name":"English (American)"}
],
"target": [
{"code":"EN-GB", "name":"English (British)"},
{"code":"JP", "name":"Japanese"}
]
}
```
## `/api/v1/akkoma/frontend_settings/:frontend_name`
### Lists all configuration profiles of the selected frontend for the current user
* Method: `GET`
* Authentication: required
* Params: none
* Response: JSON
* Example response:
```json
[
{"name":"default","version":31}
]
```
## `/api/v1/akkoma/frontend_settings/:frontend_name/:profile_name`
### Returns the full selected frontend settings profile of the current user
* Method: `GET`
* Authentication: required
* Params: none
* Response: JSON
* Example response:
```json
{
"version": 31,
"settings": {
"streaming": true,
"conversationDisplay": "tree",
...
}
}
```
## `/api/v1/akkoma/frontend_settings/:frontend_name/:profile_name`
### Updates the frontend settings profile
* Method: `PUT`
* Authentication: required
* Params:
* `version`: INTEGER
* `settings`: JSON object containing the entire new settings
* Response: JSON
* Example response:
```json
{
"streaming": false,
"conversationDisplay": "tree",
...
}
```
!!! note
The `version` field must be increased by exactly one on each update
## `/api/v1/akkoma/frontend_settings/:frontend_name/:profile_name`
### Drops the specified frontend settings profile
* Method: `DELETE`
* Authentication: required
* Params: none
* Response: JSON
* Example response:
```json
{"deleted":"ok"}
```
## `/api/v1/timelines/bubble`
### Returns a timeline for the local and closely related instances
Works like all other Mastodon-API timeline queries with the documented
[Akkoma-specific additions and tweaks](./differences_in_mastoapi_responses.md#timelines).

# Differences in Mastodon API responses from vanilla Mastodon

An Akkoma instance can be identified by "<Mastodon version> (compatible; Akkoma <version>)" present in the `version` field in the response from `/api/v1/instance`
## Flake IDs

## Timelines

In addition to Mastodon's timelines, there is also a “bubble timeline” showing
posts from the local instance and a set of closely related instances as chosen
by the administrator. It is available under `/api/v1/timelines/bubble`.
Adding the parameter `with_muted=true` to the timeline queries will also return activities by muted (not by blocked!) users.

Adding the parameter `exclude_visibilities` to the timeline queries will exclude the statuses with the given visibilities. The parameter accepts an array of visibility types (`public`, `unlisted`, `private`, `direct`), e.g., `exclude_visibilities[]=direct&exclude_visibilities[]=private`.

Adding the parameter `reply_visibility` to the public, bubble or home timelines queries will filter replies. Possible values: without parameter (default) shows all replies, `following` - replies directed to you or users you follow, `self` - replies directed to you.

Adding the parameter `instance=lain.com` to the public timeline will show only statuses originating from `lain.com` (or any remote instance).

All but the direct timeline accept these parameters:
- `only_media`: show only statuses with media attached
- `remote`: show only remote statuses

Home, public, hashtag & list timelines further accept:
- `local`: show only local statuses
## Statuses

- `visibility`: has additional possible values `list` and `local` (for local-only statuses)

Has these additional fields under the `pleroma` object:

- `notification_settings`: object, can be absent. See `/api/v1/pleroma/notification_settings` for the parameters/keys returned.
- `favicon`: nullable URL string, Favicon image of the user's instance
Has these additional fields under the `akkoma` object:

- `instance`: nullable object with metadata about the user's instance
- `status_ttl_days`: nullable int, default time after which statuses are deleted
- `permit_followback`: boolean, whether follows from followed accounts are auto-approved
### Source

Has these additional fields under the `pleroma` object:
The following endpoints are additionally present in our actors.

- `oauthRegistrationEndpoint` (`http://litepub.social/ns#oauthRegistrationEndpoint`)

### oauthRegistrationEndpoint

Points to MastodonAPI `/api/v1/apps` for now.

See <https://docs.joinmastodon.org/methods/apps/>
## Emoji reactions
Emoji reactions are implemented as a new activity type `EmojiReact`.
A single user is allowed to react multiple times with different emoji to the
same post. However, they may only react at most once with the same emoji.
Repeated reactions from the same user with the same emoji are to be ignored.
Emoji reactions are also distinct from `Like` activities and a user may both
`Like` and react to a post.
!!! note
Misskey also supports emoji reactions, but the implementation differs.
It equates likes and reactions and only allows a single reaction per post.
The emoji is placed in the `content` field of the activity
and the `object` property points to the note being reacted to.
Emoji can either be any Unicode emoji sequence or a custom emoji.
The latter must place their shortcode, including enclosing colons,
into `content` and put the emoji object inside the `tag` property.
The `tag` property MAY be omitted for Unicode emoji.
An example reaction with a Unicode emoji:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "EmojiReact",
"id": "https://example.org/activities/23143872a0346141",
"actor": "https://example.org/users/akko",
"nickname": "akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"content": "🧡",
"object": "https://remote.example/objects/9f0e93499d8314a9"
}
```
An example reaction with a custom emoji:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "EmojiReact",
"id": "https://example.org/activities/d75586dec0541650",
"actor": "https://example.org/users/akko",
"nickname": "akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"content": ":mouse:",
"object": "https://remote.example/objects/9f0e93499d8314a9",
"tag": [{
"type": "Emoji",
"id": null,
"name": "mouse",
"icon": {
"type": "Image",
"url": "https://example.org/emoji/mouse/mouse.png"
}
}]
}
```
!!! note
Although an emoji reaction can only contain a single emoji,
for compatibility with older versions of Pleroma and Akkoma,
it is recommended to wrap the emoji object in a single-element array.
When reacting with a remote custom emoji, do not include the remote domain in the `content` shortcode
*(unlike in our REST API which needs the domain)*:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "EmojiReact",
"id": "https://example.org/activities/7993dcae98d8d5ec",
"actor": "https://example.org/users/akko",
"nickname": "akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"content": ":hug:",
"object": "https://remote.example/objects/9f0e93499d8314a9",
"tag": [{
"type": "Emoji",
"id": "https://other.example/emojis/hug",
"name": "hug",
"icon": {
"type": "Image",
"url": "https://other.example/files/b71cea432b3fad67.webp"
}
}]
}
```
Emoji reactions can be retracted using a standard `Undo` activity:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"http://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "Undo",
"id": "http://example.org/activities/4685792e-efb6-4309-b508-ae4f355dd695",
"actor": "https://example.org/users/akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"object": "https://example.org/activities/23143872a0346141"
}
```
## User profile backgrounds
Akkoma federates user profile backgrounds the same way as Sharkey.
An actor's ActivityPub representation contains an additional
`backgroundUrl` property containing an `Image` object. This property
belongs to the `"sharkey": "https://joinsharkey.org/ns#"` namespace.
## Quote Posts
Akkoma allows referencing a single other note as a quote,
which will be prominently displayed in the interface.
The quoted post is referenced by its ActivityPub id in the `quoteUri` property.
!!! note
Old Misskey only understood, and modern Misskey still prefers,
the `_misskey_quote` property for this. Similarly, some other older
software used `quoteUrl` or `quoteURL`.
All current implementations with quote support understand `quoteUri`.
Example:
```json
{
"@context": [
"https://www.w3.org/ns/activitystreams",
"https://example.org/schemas/litepub-0.1.jsonld",
{
"@language": "und"
}
],
"type": "Note",
"id": "https://example.org/activities/85717e587f95d5c0",
"actor": "https://example.org/users/akko",
"to": ["https://remote.example/users/diana", "https://example.org/users/akko/followers"],
"cc": ["https://www.w3.org/ns/activitystreams#Public"],
"context": "https://example.org/contexts/1",
"content": "Look at that!",
"quoteUri": "http://remote.example/status/85717e587f95d5c0",
"contentMap": {
"en": "Look at that!"
},
"source": {
"content": "Look at that!",
"mediaType": "text/plain"
},
"published": "2024-04-06T23:40:28Z",
"updated": "2024-04-06T23:40:28Z",
"attachemnt": [],
"tag": []
}
```
## Threads
Akkoma assigns all posts of the same thread the same `context`. This is a
standard ActivityPub property but its meaning is left vague. Akkoma will
always treat posts with identical `context` as part of the same thread.
`context` must not be assumed to hold any meaning or be dereferencable.
Incoming posts without `context` will be assigned a new context.
!!! note
Mastodon uses the non-standard `conversation` property for the same purpose
*(named after an older OStatus property)*. For incoming posts without
`context` but with `conversation` Akkoma will use the value from
`conversation` to fill in `context`.
For outgoing posts Akkoma will duplicate the context into `conversation`.
## Post Source
Unlike Mastodon, Akkoma supports drafting posts in multiple source formats
besides plaintext, like Markdown or MFM. The original input is preserved
in the standard ActivityPub `source` property *(not supported by Mastodon)*.
Still, `content` will always be present and contain the prerendered HTML form.
Supported `mediaType` values include:
- `text/plain`
- `text/markdown`
- `text/bbcode`
- `text/x.misskeymarkdown`
## Post Language
!!! note
This is also supported in and compatible with Mastodon, but since
joinmastodon.org doesn't document it yet, it is included here.
[GoToSocial](https://docs.gotosocial.org/en/latest/federation/federating_with_gotosocial/#content-contentmap-and-language)
has a more refined version of this which can correctly deal with multiple language entries.
A post can indicate its language by including a `contentMap` object
which contains a sub key named after the language's ISO 639-1 code
and its content identical to the post's `content` field.
Currently Akkoma, just like Mastodon, only properly supports a single language entry;
in case of multiple entries a random language will be picked.
Furthermore, Akkoma currently only reads the `content` field
and never the value from `contentMap`.
## Local post scope
Posts using this scope will never federate to other servers,
but for the sake of completeness it is listed here.
In addition to the usual scopes *(public, unlisted, followers-only, direct)*
Akkoma supports a “local” post scope. Such posts will not federate to
other instances and will only be shown to logged-in users on the same instance.
It is included in the local timeline.
This may be useful to discuss or announce instance-specific policies and topics.
A post is addressed to the local scope by including `<base url of instance>/#Public`
in its `to` field. E.g. if the instance is on `https://example.org` it would use
`https://example.org/#Public`.
An implementation creating a new post MUST NOT address both the local and
general public scope `as:Public` at the same time. A post addressing the local
scope MUST NOT be sent to other instances or be possible to fetch by other
instances regardless of potential other listed addressees.
When receiving a remote post addressing both the public scope and what appears
to be a local-scope identifier, the post SHOULD be treated without assigning any
special meaning to the potential local-scope identifier.
!!! note
Misskey-derivatives have a similar concept of non-federated posts,
however those are also shown publicly on the local web interface
and are thus visible to non-members.
## List post scope
Messages originally addressed to a custom list will contain
a `listMessage` field with an unresolvable pseudo ActivityPub id.
# Deprecated and Removed Extensions
The following extensions were used in the past but have been dropped.
Documentation is retained here as a reference and since old objects might
still contain related fields.
## Actor endpoints
The following endpoints used to be present:
- `uploadMedia` (`https://www.w3.org/ns/activitystreams#uploadMedia`)
### uploadMedia

Inspired by <https://www.w3.org/wiki/SocialCG/ActivityPub/MediaUpload>, it is part of the ActivityStreams namespace because it used to be part of the ActivityPub specification and got removed from it.

Content-Type: multipart/form-data

Parameters:
- (required) `file`: The file being uploaded
- (optional) `description`: A plain-text description of the media, for accessibility purposes.

Response: HTTP 201 Created with the object in the body, no `Location` header provided as it doesn't have an `id`

The object given in the response should then be inserted into an Object's `attachment` field.
# Nodeinfo Extensions
Akkoma currently implements versions 2.0 and 2.1 of the nodeinfo spec,
but provides the following additional fields.
## metadata
The spec leaves the content of `metadata` up to implementations
and indeed Akkoma adds many fields here apart from the commonly
found `nodeName` and `nodeDescription` fields.
### accountActivationRequired
Whether or not users need to confirm their email before completing registration.
*(boolean)*
!!! note
Not to be confused with account approval, where each registration needs to
be manually approved by an admin. Account approval has no nodeinfo entry.
### features
Array of strings denoting supported server features. E.g. a server supporting
quote posts should include a `"quote_posting"` entry here.
A non-exhaustive list of possible features:
- `polls`
- `quote_posting`
- `editing`
- `bubble_timeline`
- `pleroma_emoji_reactions` *(Unicode emoji)*
- `custom_emoji_reactions`
- `akkoma_api`
- `akkoma:machine_translation`
- `mastodon_api`
- `pleroma_api`
### federatedTimelineAvailable
Whether or not the “federated timeline”, i.e. a timeline containing posts from
the entire known network, is made available.
*(boolean)*
### federation
This section is optional and can contain various custom keys describing federation policies.
The following are required to be present:
- `enabled` *(boolean)* whether the server federates at all
A non-exhaustive list of optional keys:
- `exclusions` *(boolean)* whether some federation policies are withheld
- `mrf_simple` *(object)* describes how the Simple MRF policy is configured
### fieldsLimits
A JSON object documenting restrictions for user account info fields.
All properties are integers.
- `maxFields` maximum number of account info fields local users can create
- `maxRemoteFields` maximum number of account info fields remote users can have
before the user gets rejected or fields truncated
- `nameLength` maximum length of a field's name
- `valueLength` maximum length of a field's value
### invitesEnabled
Whether or not signing up via invite codes is possible.
*(boolean)*
### localBubbleInstances
Array of domains (as strings) of other instances chosen
by the admin which are shown in the bubble timeline.
### mailerEnabled
Whether or not the instance can send out emails.
*(boolean)*
### nodeDescription
Human-friendly description of this instance
*(string)*
### nodeName
Human-friendly name of this instance
*(string)*
### pollLimits
JSON object containing limits for polls created by local users.
All values are integers.
- `max_options` maximum number of poll options
- `max_option_chars` maximum characters per poll option
- `min_expiration` minimum time in seconds a poll must be open for
- `max_expiration` maximum time a poll is allowed to be open for
### postFormats
Array of strings containing media types for supported post source formats.
A non-exhaustive list of possible values:
- `text/plain`
- `text/markdown`
- `text/bbcode`
- `text/x.misskeymarkdown`
### private
Whether or not unauthenticated API access is permitted.
*(boolean)*
### privilegedStaff
Whether or not moderators are trusted to perform some
additional tasks such as issuing password reset emails.
### publicTimelineVisibility
JSON object containing boolean-valued keys reporting
if a given timeline can be viewed without login.
- `local`
- `federated`
- `bubble`
### restrictedNicknames
Array of strings listing nicknames forbidden to be used during signup.
### skipThreadContainment
Whether broken threads are filtered out
*(boolean)*
### staffAccounts
Array containing ActivityPub IDs of local accounts
with some form of elevated privilege on the instance.
### suggestions
JSON object containing info on whether the interaction-based
Mastodon `/api/v1/suggestions` feature is enabled and optionally
additional implementation-defined fields with more details
on e.g. how suggested users are selected.
!!! note
This has no relation to the newer /api/v2/suggestions API
which also (or exclusively) contains staff-curated entries.
- `enabled` *(boolean)* whether or not user recommendations are enabled
### uploadLimits
JSON object documenting various upload-related size limits.
All values are integers and in bytes.
- `avatar` maximum size of uploaded user avatars
- `banner` maximum size of uploaded user profile banners
- `background` maximum size of uploaded user profile backgrounds
- `general` maximum size for all other kinds of uploads
doas apk add nginx
```
* Copy the example nginx configuration to the nginx folder

```shell
doas cp /opt/akkoma/installation/nginx/akkoma.nginx /etc/nginx/conf.d/akkoma.conf
```

* Before starting nginx edit the configuration and change it to your needs. You must change `server_name`. You can use `nano` (install with `apk add nano` if missing).
* Enable and start nginx:

```shell
doas rc-update add nginx
doas rc-service nginx start
```

* Setup your SSL cert, using your method of choice or certbot. If using certbot, first install it:
```shell
doas apk add certbot certbot-nginx
```
and then set it up:
```shell
doas mkdir -p /var/lib/letsencrypt/
doas certbot --email <your@emailaddress> -d <yourdomain> -d <media_domain> --nginx
```
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being ratelimited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; this can be checked for by running `nginx -t`.
To automatically renew, set up a cron job like so:
```shell
# Enable the crond service
doas rc-update add crond
doas rc-service crond start
# Test that renewals work
doas certbot renew --cert-name yourinstance.tld --nginx --dry-run
# Add the renewal task to cron
echo '#!/bin/sh
certbot renew --cert-name yourinstance.tld --nginx
' | doas tee /etc/periodic/daily/renew-akkoma-cert
doas chmod +x /etc/periodic/daily/renew-akkoma-cert
```
#### OpenRC service
sudo pacman -S nginx
```
* Copy the example nginx configuration:

```shell
sudo cp /opt/akkoma/installation/nginx/akkoma.nginx /etc/nginx/conf.d/akkoma.conf
```

* Before starting nginx edit the configuration and change it to your needs (e.g. change servername, change cert paths)
* Enable and start nginx:

```shell
sudo systemctl enable --now nginx.service
```
* Setup your SSL cert, using your method of choice or certbot. If using certbot, first install it:

and then set it up:

```shell
sudo mkdir -p /var/lib/letsencrypt/
sudo certbot --email <your@emailaddress> -d <yourdomain> -d <media_domain> --nginx
```
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being ratelimited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; this can be checked for by running `nginx -t`.

To make sure renewals work, enable the appropriate systemd timer:

```shell
sudo systemctl enable --now certbot-renew.timer
```

Certificate renewal should be handled automatically by Certbot from now on.
#### Other webserver/proxies
sudo apt install nginx
```
* Copy the example nginx configuration and activate it:

```shell
sudo cp /opt/akkoma/installation/nginx/akkoma.nginx /etc/nginx/sites-available/akkoma.nginx
sudo ln -s /etc/nginx/sites-available/akkoma.nginx /etc/nginx/sites-enabled/akkoma.nginx
```

* Enable and start nginx:

```shell
sudo systemctl enable --now nginx.service
```

* Setup your SSL cert, using your method of choice or certbot. If using certbot, first install it:

```shell
sudo apt install certbot python3-certbot-nginx
```
and then set it up:
```shell
sudo mkdir -p /var/lib/letsencrypt/
sudo certbot --email <your@emailaddress> -d <yourdomain> -d <media_domain> --nginx
```
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being ratelimited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems are nginx config syntax errors; this can be checked for by running `nginx -t`.
Certificate renewal should be handled automatically by Certbot from now on.
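To double-check that automatic renewal is actually in place, something like the following can be used (a sketch; the exact timer unit name varies between distros):

```shell
# look for an enabled certbot timer
systemctl list-timers | grep -i certbot
# simulate a renewal without issuing a real certificate
sudo certbot renew --dry-run
```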
#### Other webserver/proxies

You can find example configurations for them in `/opt/akkoma/installation/`.
@ -125,7 +125,26 @@ cp docker-resources/Caddyfile.example docker-resources/Caddyfile
Then edit the TLD in your caddyfile to the domain you're serving on.

Copy the commented out `caddy` section in `docker-compose.yml` into a new file called `docker-compose.override.yml` like so:
```yaml
version: "3.7"

services:
  proxy:
    image: caddy:2-alpine
    restart: unless-stopped
    links:
      - akkoma
    ports: [
      "443:443",
      "80:80"
    ]
    volumes:
      - ./docker-resources/Caddyfile:/etc/caddy/Caddyfile
      - ./caddy-data:/data
      - ./caddy-config:/config
```
then run `docker compose up -d` again.

#### Running a reverse proxy on the host
@ -155,6 +174,12 @@ git pull
docker compose restart akkoma db
```
### Modifying the Docker services
If you want to modify the services defined in the docker compose file, you can
create a new file called `docker-compose.override.yml`. There you can add any
overrides or additional services without worrying about git conflicts when a
new release comes out.
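As a quick sanity check, `docker compose config` prints the merged result of `docker-compose.yml` and any `docker-compose.override.yml`, which lets you verify the override before applying it (a sketch):

```shell
# render the merged compose configuration, including overrides
docker compose config
# apply the changes
docker compose up -d
```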
#### Further reading

{! installation/further_reading.include !}
@ -135,23 +135,6 @@ If you want to open your newly installed instance to the world, you should run n
sudo dnf install nginx
```
* Copy the example nginx configuration and activate it:
```shell
@ -165,12 +148,23 @@ sudo cp /opt/akkoma/installation/nginx/akkoma.nginx /etc/nginx/conf.d/akkoma.con
sudo systemctl enable --now nginx.service
```
* Setup your SSL cert, using your method of choice or certbot. If using certbot, first install it:
```shell
sudo dnf install certbot python3-certbot-nginx
```
and then set it up:
```shell
sudo certbot --email <your@emailaddress> -d <yourdomain> -d <media_domain> --nginx
```
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being ratelimited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; these can be checked for by running `nginx -t`.

Certificate renewal should be handled automatically by Certbot from now on.
#### Other webserver/proxies

You can find example configurations for them in `/opt/akkoma/installation/`.
@ -1,8 +1,8 @@
## Required dependencies

* PostgreSQL 12+
* Elixir 1.14+ (currently tested up to 1.16)
* Erlang OTP 25+ (currently tested up to OTP26)
* git
* file / libmagic
* gcc (clang might also work)
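A quick way to confirm the installed versions meet these requirements (a sketch):

```shell
psql --version      # should report 12 or newer
elixir --version    # should report Elixir 1.14+ running on OTP 25+
```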
@ -201,25 +201,6 @@ Assuming you want to open your newly installed federated social network to, well
include sites-enabled/*;
```
* Copy the example nginx configuration and activate it:
```shell
@ -237,9 +218,24 @@ Pay special attention to the line that begins with `ssl_ecdh_curve`. It is stong
```shell
# rc-update add nginx default
# rc-service nginx start
```
* Setup your SSL cert, using your method of choice or certbot. If using certbot, install it if you haven't already:
```shell
# emerge --ask app-crypt/certbot app-crypt/certbot-nginx
```
and then set it up:
```shell
# mkdir -p /var/lib/letsencrypt/
# certbot --email <your@emailaddress> -d <yourdomain> -d <media_domain> --nginx
```
If that doesn't work the first time, add `--dry-run` to further attempts to avoid being ratelimited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; these can be checked for by running `nginx -t`.

If you are using certbot, it is HIGHLY recommended that you set up a cron job that renews your certificate, and that you install the suggested `certbot-nginx` plugin. If you don't do these things, you only have yourself to blame when your instance breaks suddenly because you forgot about it.

First, ensure that the command you will be installing into your crontab works.
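A minimal sketch of such a check and a matching cron entry, assuming the `certbot-nginx` plugin is installed and nginx runs under OpenRC (adjust to your setup):

```shell
# confirm a renewal would succeed; no real certificate is issued
certbot renew --nginx --dry-run
# if that works, schedule it, e.g. as a root crontab line such as:
#   0 3 * * * certbot renew --nginx --post-hook "rc-service nginx reload"
```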
@ -21,6 +21,33 @@ fork of Akkoma - luckily this isn't very hard.
You'll need to update the backend, then possibly the frontend, depending
on your setup.

## Backup diverging features

Over time Akkoma and Pleroma have added or removed different features
and reorganised the database in different ways. If you want to be able to
migrate back to Pleroma without losing any affected data, you'll want to
make a backup before starting the migration.
If you're not interested in migrating back, skip this section
*(although it might be a good idea to temporarily keep a full DB backup
just in case something unexpected happens during migration)*
As of 2024-02 you will want to keep a backup of:
- the entire `chats` and `chat_message_references` tables
The following columns are not deleted by a migration to Akkoma, but a migration
back to Pleroma or future Akkoma upgrades might affect them, so perhaps back them up as well:
- the `birthday` of users and their `show_birthday` setting
- the `expires_at` key in the `user_relationships` table
*(used by temporary mutes)*
The way cached instance metadata is stored differs, but since it
will be refetched and updated anyway, there's no need for a backup.

It's best to check all newer migrations unique to Akkoma/Pleroma
to get an up-to-date picture of what needs to be kept; a sketch of
backing up the affected tables follows below.
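For example, the chat tables could be kept with a plain `pg_dump` (a sketch; it assumes the default `pleroma` database name and a custom-format archive, adjust to your setup):

```shell
pg_dump --dbname=pleroma --table=chats --table=chat_message_references \
  --format=custom --file=akkoma-diverging-tables.dump
```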
## From Source

If you're running the source Akkoma install, you'll need to set the
@ -34,16 +61,7 @@ git pull -r
# to run "git merge stable" instead (or develop if you want)
```

And compile as usual.
## From OTP
@ -53,15 +71,44 @@ This will just be setting the update URL - find your flavour from the [mapping o
export FLAVOUR=[the flavour you found above]
./bin/pleroma_ctl update --zip-url https://akkoma-updates.s3-website.fr-par.scw.cloud/stable/akkoma-$FLAVOUR.zip
```

When updating in the future, you can just use

```bash
./bin/pleroma_ctl update --branch stable
```
## Database Migrations
### WARNING - Migrating from Pleroma past 2022-08
If you are on Pleroma stable >= 2.5.0 or Pleroma develop, and
have updated since 2022-08, you may have issues with database migrations.
Please first roll back the given migrations:
=== "OTP"
```bash
./bin/pleroma_ctl rollback --migrations-path priv/repo/optional_migrations/pleroma_develop_rollbacks -n5
```
=== "From Source"
```bash
MIX_ENV=prod mix ecto.rollback --migrations-path priv/repo/optional_migrations/pleroma_develop_rollbacks -n5
```
### Applying Akkoma Database Migrations
Just run
=== "OTP"
```bash
./bin/pleroma_ctl migrate
```
=== "From Source"
```bash
MIX_ENV=prod mix ecto.migrate
```
## Frontend changes

Akkoma comes with a few frontend changes as well as backend ones,
@ -130,3 +177,4 @@ MIX_ENV=prod mix ecto.rollback --to 20210416051708
```

Then switch back to Pleroma for updates (similar to how it was done to migrate to Akkoma), and remove the front-ends. The front-ends are installed in the `frontends` folder in the [static directory](../configuration/static_dir.md). Once you are back to Pleroma, you will need to run the database migrations again. See the Pleroma documentation for this.

After this, use your previous backups to restore data from diverging features.
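If you made the table backups suggested above, a matching restore could look like this (a sketch, assuming the custom-format `pg_dump` archive from earlier; `--disable-triggers` typically requires superuser rights):

```shell
pg_restore --dbname=pleroma --data-only --disable-triggers akkoma-diverging-tables.dump
```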
@ -14,7 +14,7 @@ Note: the packages are not required with the current default settings of Akkoma.
`ImageMagick` is a set of tools to create, edit, compose, or convert bitmap images.

It is required for the following Akkoma features:
* `Pleroma.Upload.Filters.Mogrify`, `Pleroma.Upload.Filters.Mogrifun` upload filters (related config: `Pleroma.Upload/filters` in `config/config.exs`)
* Media preview proxy for still images (related config: `media_preview_proxy/enabled` in `config/config.exs`)

## `ffmpeg`
@ -29,4 +29,5 @@ It is required for the following Akkoma features:
`exiftool` is a media file metadata reader/writer.

It is required for the following Akkoma features:
* `Pleroma.Upload.Filters.Exiftool.StripMetadata` upload filter (related config: `Pleroma.Upload/filters` in `config/config.exs`)
* `Pleroma.Upload.Filters.Exiftool.ReadDescription` upload filter (related config: `Pleroma.Upload/filters` in `config/config.exs`)
@ -9,7 +9,7 @@ This guide covers a installation using an OTP release. To install Akkoma from so
* For installing OTP releases on RedHat-based distros like Fedora and Centos Stream, please follow [this guide](./otp_redhat_en.md) instead.
* A (sub)domain pointed to the machine

You will be running commands as root. If you aren't root already, please elevate your privileges by executing `sudo -i`/`su`.

While in theory OTP releases can be installed on any compatible machine, for the sake of simplicity this guide focuses only on Debian/Ubuntu and Alpine.
@ -176,11 +176,6 @@ su akkoma -s $SHELL -lc "./bin/pleroma stop"
### Setting up nginx and getting Let's Encrypt SSL certificates

#### Copy Akkoma nginx configuration to the nginx folder

The location of nginx configs is dependent on the distro
@ -209,6 +204,14 @@ $EDITOR path-to-nginx-config
# Verify that the config is valid
nginx -t
```

#### Get a Let's Encrypt certificate

```sh
certbot --nginx -d yourinstance.tld -d media.yourinstance.tld
```

If that doesn't work the first time, add `--dry-run` to further attempts to avoid being ratelimited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; these can be checked for by running `nginx -t`.

#### Start nginx
=== "Alpine" === "Alpine"
@ -252,32 +255,19 @@ If everything worked, you should see Akkoma-FE when visiting your domain. If tha
## Post installation

### Setting up auto-renew of the Let's Encrypt certificate

=== "Alpine"

    ```
    # Start the cron daemon and make it start on boot
    rc-service crond start
    rc-update add crond

    # Ensure the renewal method is working
    certbot renew --cert-name yourinstance.tld --nginx --dry-run

    # Add it to the daily cron
    echo '#!/bin/sh
    certbot renew --cert-name yourinstance.tld --nginx
    ' > /etc/periodic/daily/renew-akkoma-cert
    chmod +x /etc/periodic/daily/renew-akkoma-cert
@ -286,22 +276,7 @@ nginx -t
    ```

=== "Debian/Ubuntu"

    This should be automatically enabled with the `certbot-renew.timer` systemd unit.
## Create your first user and set as admin

```sh
@ -82,6 +82,7 @@ Other than things bundled in the OTP release Akkoma depends on:
* PostgreSQL (also utilizes extensions in postgresql-contrib)
* nginx (could be swapped with another reverse proxy but this guide covers only it)
* certbot (for Let's Encrypt certificates, could be swapped with another ACME client, but this guide covers only it)
    * If you are using certbot, also install the `python3-certbot-nginx` package for the nginx plugin
* libmagic/file

First, update your system, if not already done:
@ -169,12 +170,6 @@ sudo -Hu akkoma ./bin/pleroma stop
### Setting up nginx and getting Let's Encrypt SSL certificates

#### Copy Akkoma nginx configuration to the nginx folder
```shell
@ -195,8 +190,15 @@ sudo nginx -t
sudo systemctl start nginx
```

#### Get a Let's Encrypt certificate

```shell
sudo certbot --email <your@emailaddress> -d <yourdomain> -d <media_domain> --nginx
```

If that doesn't work the first time, add `--dry-run` to further attempts to avoid being ratelimited as you identify the issue, and do not remove it until the dry run succeeds. A common source of problems is nginx config syntax errors; these can be checked for by running `nginx -t`.

If you're successful in obtaining the certificates, opening your (sub)domain in a browser will result in a 502 error, since Akkoma hasn't been started yet.
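This can be confirmed from the command line as well; expect a `502 Bad Gateway` status until Akkoma is running (a sketch):

```shell
curl -I https://<yourdomain>
```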
### Setting up a system service
@ -239,19 +241,11 @@ sudo nginx -t
# Restart nginx
sudo systemctl restart nginx

# Test that renewals work properly
sudo certbot renew --cert-name yourinstance.tld --nginx --dry-run
```

Assuming the commands were run successfully, certbot should be able to renew your certificates automatically via the `certbot-renew.timer` systemd unit.
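You can verify the timer is active with something like the following (a sketch; the unit name may differ on your distro):

```shell
systemctl list-timers certbot-renew.timer
```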
## Create your first user and set as admin

```shell
@ -1,2 +0,0 @@
elixir_version=1.14.3
erlang_version=25.3
@ -60,7 +60,7 @@ ServerTokens Prod
Include /etc/letsencrypt/options-ssl-apache.conf

# Uncomment the following to enable MediaProxy caching on disk
#CacheRoot /var/tmp/akkoma-media-cache/
#CacheDirLevels 1
#CacheDirLength 2
#CacheEnable disk /proxy
@ -16,7 +16,7 @@
SCRIPTNAME=${0##*/}

# mod_disk_cache directory
CACHE_DIRECTORY="/var/tmp/akkoma-media-cache"

## Removes an item via the htcacheclean utility
## $1 - the filename, can be a pattern .
@ -12,26 +12,22 @@ example.tld {
		output file /var/log/caddy/akkoma.log
	}

	encode gzip

	# this is explicitly IPv4 since Pleroma.Web.Endpoint binds on IPv4 only
	# and `localhost.` resolves to [::0] on some systems: see issue #930
	reverse_proxy 127.0.0.1:4000

	@mediaproxy path /media/* /proxy/*
	handle @mediaproxy {
		redir https://media.example.tld{uri} permanent
	}
}

media.example.tld {
	@mediaproxy path /media/* /proxy/*
	reverse_proxy @mediaproxy 127.0.0.1:4000 {
		transport http {
			response_header_timeout 10s
			read_timeout 15s
		}
	}
}
@ -1,12 +1,9 @@
# default nginx site config for Akkoma
#
# See the documentation at docs.akkoma.dev for your particular distro/OS for
# installation instructions.

proxy_cache_path /var/tmp/akkoma-media-cache levels=1:2 keys_zone=akkoma_media_cache:10m max_size=1g
    inactive=720m use_temp_path=off;

# this is explicitly IPv4 since Pleroma.Web.Endpoint binds on IPv4 only
@ -15,25 +12,19 @@ upstream phoenix {
    server 127.0.0.1:4000 max_fails=5 fail_timeout=60s;
}

# If you are setting up TLS certificates without certbot, uncomment the
# following to enable HTTP -> HTTPS redirects. Certbot users don't need to do
# this as it will automatically do this for you.
# server {
#     server_name example.tld media.example.tld;
#
#     listen 80;
#     listen [::]:80;
#
#     location / {
#         return 301 https://$server_name$request_uri;
#     }
# }
# Enable SSL session caching for improved performance
ssl_session_cache shared:ssl_session_cache:10m;
@ -41,22 +32,29 @@ ssl_session_cache shared:ssl_session_cache:10m;
server {
    server_name example.tld;

    # Once certbot is set up, this will automatically be updated to listen to
    # port 443 with TLS alongside a redirect from plaintext HTTP.
    listen 80;
    listen [::]:80;

    # If you are not using Certbot, comment out the above and uncomment/edit the following
    # listen 443 ssl http2;
    # listen [::]:443 ssl http2;
    # ssl_session_timeout 1d;
    # ssl_session_cache shared:MozSSL:10m;  # about 40000 sessions
    # ssl_session_tickets off;
    #
    # ssl_trusted_certificate /etc/letsencrypt/live/example.tld/chain.pem;
    # ssl_certificate /etc/letsencrypt/live/example.tld/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/example.tld/privkey.pem;
    #
    # ssl_protocols TLSv1.2 TLSv1.3;
    # ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    # ssl_prefer_server_ciphers off;
    # ssl_ecdh_curve X25519:prime256v1:secp384r1:secp521r1;
    # ssl_stapling on;
    # ssl_stapling_verify on;
    gzip_vary on;
    gzip_proxied any;
@ -75,9 +73,43 @@ server {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location ~ ^/(media|proxy) {
        return 404;
    }

    location / {
        proxy_pass http://phoenix;
    }
}

# Upload and MediaProxy Subdomain
# (see main domain setup for more details)
server {
    server_name media.example.tld;

    # Same as above, will be updated to HTTPS once certbot is set up.
    listen 80;
    listen [::]:80;

    # If you are not using certbot, comment the above and copy all the ssl
    # stuff from above into here.

    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript application/activity+json application/atom+xml;

    # the nginx default is 1m, not enough for large media uploads
    client_max_body_size 16m;
    ignore_invalid_headers off;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    location ~ ^/(media|proxy) {
        proxy_cache akkoma_media_cache;
@ -91,4 +123,8 @@ server {
        chunked_transfer_encoding on;

        proxy_pass http://phoenix;
    }

    location / {
        return 404;
    }
}
@ -5,7 +5,7 @@
SCRIPTNAME=${0##*/}

# NGINX cache directory
CACHE_DIRECTORY="/var/tmp/akkoma-media-cache"

## Return the files where the items are cached.
## $1 - the filename, can be a pattern .
@ -16,7 +16,7 @@ defmodule Mix.Pleroma do
    :fast_html,
    :oban
  ]

  @cachex_children ["object", "user", "scrubber", "web_resp", "http_backoff"]

  @doc "Common functions to be reused in mix tasks"
  def start_pleroma do
    Pleroma.Config.Holder.save_default()
@ -112,18 +112,26 @@ defmodule Mix.Pleroma do
    end
  end

  def shell_info(message) when is_binary(message) or is_list(message) do
    if mix_shell?(),
      do: Mix.shell().info(message),
      else: IO.puts(message)
  end

  def shell_info(message) do
    shell_info("#{inspect(message)}")
  end

  def shell_error(message) when is_binary(message) or is_list(message) do
    if mix_shell?(),
      do: Mix.shell().error(message),
      else: IO.puts(:stderr, message)
  end

  def shell_error(message) do
    shell_error("#{inspect(message)}")
  end

  @doc "Performs a safe check whether `Mix.shell/0` is available (does not raise if Mix is not loaded)"
  def mix_shell?, do: :erlang.function_exported(Mix, :shell, 0)
@ -8,7 +8,6 @@ defmodule Mix.Tasks.Pleroma.Activity do
  alias Pleroma.User
  alias Pleroma.Web.CommonAPI
  alias Pleroma.Pagination

  import Mix.Pleroma
  import Ecto.Query
@ -17,7 +16,7 @@ defmodule Mix.Tasks.Pleroma.Activity do
    id
    |> Activity.get_by_id()
    |> shell_info()
  end

  def run(["delete_by_keyword", user, keyword | _rest]) do
@ -35,7 +34,7 @@ defmodule Mix.Tasks.Pleroma.Activity do
    )
    |> Enum.map(fn x -> CommonAPI.delete(x.id, u) end)
    |> Enum.count()
    |> shell_info()
  end

  defp query_with(q, search_query) do
@ -10,7 +10,6 @@ defmodule Mix.Tasks.Pleroma.Config do
  alias Pleroma.ConfigDB
  alias Pleroma.Repo

  @shortdoc "Manages the location of the config"
  @moduledoc File.read!("docs/docs/administration/CLI_tasks/config.md")
@ -245,38 +244,6 @@ defmodule Mix.Tasks.Pleroma.Config do
      end
    end

  @spec migrate_to_db(Path.t() | nil) :: any()
  def migrate_to_db(file_path \\ nil) do
    with :ok <- Pleroma.Config.DeprecationWarnings.warn() do
@ -319,9 +286,6 @@ defmodule Mix.Tasks.Pleroma.Config do
  defp create(group, settings) do
    group
    |> Pleroma.Config.Loader.filter_group(settings)
    |> Enum.each(fn {key, value} ->
      {:ok, _} = ConfigDB.update_or_create(%{group: group, key: key, value: value})
@ -20,6 +20,102 @@ defmodule Mix.Tasks.Pleroma.Database do
  @shortdoc "A collection of database related tasks"
  @moduledoc File.read!("docs/docs/administration/CLI_tasks/database.md")
  defp maybe_limit(query, limit_cnt) do
    if is_number(limit_cnt) and limit_cnt > 0 do
      limit(query, [], ^limit_cnt)
    else
      query
    end
  end

  defp limit_statement(limit) when is_number(limit) do
    if limit > 0 do
      "LIMIT #{limit}"
    else
      ""
    end
  end

  defp prune_orphaned_activities_singles(limit) do
    %{:num_rows => del_single} =
      """
      delete from public.activities
      where id in (
        select a.id from public.activities a
        left join public.objects o on a.data ->> 'object' = o.data ->> 'id'
        left join public.activities a2 on a.data ->> 'object' = a2.data ->> 'id'
        left join public.users u on a.data ->> 'object' = u.ap_id
        where not a.local
        and jsonb_typeof(a."data" -> 'object') = 'string'
        and o.id is null
        and a2.id is null
        and u.id is null
        #{limit_statement(limit)}
      )
      """
      |> Repo.query!([], timeout: :infinity)

    Logger.info("Prune activity singles: deleted #{del_single} rows...")
    del_single
  end

  defp prune_orphaned_activities_array(limit) do
    %{:num_rows => del_array} =
      """
      delete from public.activities
      where id in (
        select a.id from public.activities a
        join json_array_elements_text((a."data" -> 'object')::json) as j
          on a.data->>'type' = 'Flag'
        left join public.objects o on j.value = o.data ->> 'id'
        left join public.activities a2 on j.value = a2.data ->> 'id'
        left join public.users u on j.value = u.ap_id
        group by a.id
        having max(o.data ->> 'id') is null
        and max(a2.data ->> 'id') is null
        and max(u.ap_id) is null
        #{limit_statement(limit)}
      )
      """
      |> Repo.query!([], timeout: :infinity)

    Logger.info("Prune activity arrays: deleted #{del_array} rows...")
    del_array
  end

  def prune_orphaned_activities(limit \\ 0, opts \\ []) when is_number(limit) do
    # Activities can either refer to a single object id, an array of object ids
    # or contain an inlined object (at least after going through our normalisation)
    #
    # Flag is the only type we support with an array (and always has arrays).
    # Update is the only one with inlined objects.
    #
    # We already regularly purge old Delete, Undo, Update and Remove and if
    # rejected Follow requests anyway; no need to explicitly deal with those here.
    #
    # Since there's an index on types and there are typically only few Flag
    # activities, it's _much_ faster to utilise the index. To avoid accidentally
    # deleting useful activities should more types be added, keep typeof for singles.

    # Prune activities who link to an array of objects
    del_array =
      if Keyword.get(opts, :arrays, true) do
        prune_orphaned_activities_array(limit)
      else
        0
      end

    # Prune activities who link to a single object
    del_single =
      if Keyword.get(opts, :singles, true) do
        prune_orphaned_activities_singles(limit)
      else
        0
      end

    del_single + del_array
  end
def run(["remove_embedded_objects" | args]) do def run(["remove_embedded_objects" | args]) do
{options, [], []} = {options, [], []} =
OptionParser.parse( OptionParser.parse(
@ -62,6 +158,37 @@ defmodule Mix.Tasks.Pleroma.Database do
      )
  end
def run(["prune_orphaned_activities" | args]) do
{options, [], []} =
OptionParser.parse(
args,
strict: [
limit: :integer,
singles: :boolean,
arrays: :boolean
]
)
start_pleroma()
{limit, options} = Keyword.pop(options, :limit, 0)
log_message = "Pruning orphaned activities"
log_message =
if limit > 0 do
log_message <> ", limiting deletion to #{limit} rows"
else
log_message
end
Logger.info(log_message)
deleted = prune_orphaned_activities(limit, options)
Logger.info("Deleted #{deleted} rows")
end
def run(["prune_objects" | args]) do def run(["prune_objects" | args]) do
{options, [], []} = {options, [], []} =
OptionParser.parse( OptionParser.parse(
@ -70,7 +197,8 @@ defmodule Mix.Tasks.Pleroma.Database do
        vacuum: :boolean,
        keep_threads: :boolean,
        keep_non_public: :boolean,
        prune_orphaned_activities: :boolean,
        limit: :integer
      ]
    )
@ -79,6 +207,8 @@ defmodule Mix.Tasks.Pleroma.Database do
    deadline = Pleroma.Config.get([:instance, :remote_post_retention_days])
    time_deadline = NaiveDateTime.utc_now() |> NaiveDateTime.add(-(deadline * 86_400))

    limit_cnt = Keyword.get(options, :limit, 0)

    log_message = "Pruning objects older than #{deadline} days"

    log_message =
@ -110,129 +240,124 @@ defmodule Mix.Tasks.Pleroma.Database do
        log_message
      end

    log_message =
      if limit_cnt > 0 do
        log_message <> ", limiting to #{limit_cnt} rows"
      else
        log_message
      end

    Logger.info(log_message)

    {del_obj, _} =
      if Keyword.get(options, :keep_threads) do
        # We want to delete objects from threads where
        # 1. the newest post is still old
        # 2. none of the activities is local
        # 3. none of the activities is bookmarked
        # 4. optionally none of the posts is non-public
        deletable_context =
          if Keyword.get(options, :keep_non_public) do
            Pleroma.Activity
            |> join(:left, [a], b in Pleroma.Bookmark, on: a.id == b.activity_id)
            |> group_by([a], fragment("? ->> 'context'::text", a.data))
            |> having(
              [a],
              not fragment(
                # Posts (checked on Create Activity) is non-public
                "bool_or((not(?->'to' \\? ? OR ?->'cc' \\? ?)) and ? ->> 'type' = 'Create')",
                a.data,
                ^Pleroma.Constants.as_public(),
                a.data,
                ^Pleroma.Constants.as_public(),
                a.data
              )
            )
          else
            Pleroma.Activity
            |> join(:left, [a], b in Pleroma.Bookmark, on: a.id == b.activity_id)
            |> group_by([a], fragment("? ->> 'context'::text", a.data))
          end
          |> having([a], max(a.updated_at) < ^time_deadline)
          |> having([a], not fragment("bool_or(?)", a.local))
          |> having([_, b], fragment("max(?::text) is null", b.id))
          |> maybe_limit(limit_cnt)
          |> select([a], fragment("? ->> 'context'::text", a.data))

        Pleroma.Object
        |> where([o], fragment("? ->> 'context'::text", o.data) in subquery(deletable_context))
      else
        deletable =
          if Keyword.get(options, :keep_non_public) do
            Pleroma.Object
            |> where(
              [o],
              fragment(
                "?->'to' \\? ? OR ?->'cc' \\? ?",
                o.data,
                ^Pleroma.Constants.as_public(),
                o.data,
                ^Pleroma.Constants.as_public()
              )
            )
          else
            Pleroma.Object
          end
          |> where([o], o.updated_at < ^time_deadline)
          |> where(
            [o],
            fragment("split_part(?->>'actor', '/', 3) != ?", o.data, ^Pleroma.Web.Endpoint.host())
          )
          |> maybe_limit(limit_cnt)
          |> select([o], o.id)

        Pleroma.Object
        |> where([o], o.id in subquery(deletable))
      end
      |> Repo.delete_all(timeout: :infinity)

    Logger.info("Deleted #{del_obj} objects...")

    if !Keyword.get(options, :keep_threads) do
      # Without the --keep-threads option, it's possible that bookmarked
      # objects have been deleted. We remove the corresponding bookmarks.
      %{:num_rows => del_bookmarks} =
        """
        delete from public.bookmarks
        where id in (
          select b.id from public.bookmarks b
          left join public.activities a on b.activity_id = a.id
          left join public.objects o on a."data" ->> 'object' = o.data ->> 'id'
          where o.id is null
        )
        """
        |> Repo.query!([], timeout: :infinity)

      Logger.info("Deleted #{del_bookmarks} orphaned bookmarks...")
    end

    if Keyword.get(options, :prune_orphaned_activities) do
      del_activities = prune_orphaned_activities()
      Logger.info("Deleted #{del_activities} orphaned activities...")
    end

    %{:num_rows => del_hashtags} =
      """
      DELETE FROM hashtags AS ht
      WHERE NOT EXISTS (
        SELECT 1 FROM hashtags_objects hto
        WHERE ht.id = hto.hashtag_id)
      """
      |> Repo.query!()

    Logger.info("Deleted #{del_hashtags} no longer used hashtags...")

    if Keyword.get(options, :vacuum) do
      Logger.info("Starting vacuum...")
      Maintenance.vacuum("full")
    end

    Logger.info("All done!")
  end
def run(["prune_task"]) do def run(["prune_task"]) do
@ -3,7 +3,6 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
  alias Pleroma.Repo
  alias Pleroma.User

  require Pleroma.Constants

  import Mix.Pleroma
@ -14,13 +13,20 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
    start_pleroma()

    Pleroma.HTTP.get(url)
    |> shell_info()
  end

  def run(["fetch_object", url]) do
    start_pleroma()

    Pleroma.Object.Fetcher.fetch_object_from_id(url)
    |> IO.inspect()
  end

  def run(["home_timeline", nickname]) do
    start_pleroma()
    user = Repo.get_by!(User, nickname: nickname)
    shell_info("Home timeline query #{user.nickname}")

    followed_hashtags =
      user
@ -49,14 +55,14 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
    |> limit(20)

    Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
    |> shell_info()
  end

  def run(["user_timeline", nickname, reading_nickname]) do
    start_pleroma()
    user = Repo.get_by!(User, nickname: nickname)
    reading_user = Repo.get_by!(User, nickname: reading_nickname)
    shell_info("User timeline query #{user.nickname}")

    params =
      %{limit: 20}
@ -80,7 +86,7 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
    |> limit(20)

    Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
    |> shell_info()
  end

  def run(["notifications", nickname]) do
@ -96,7 +102,7 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
    |> limit(20)

    Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
    |> shell_info()
  end

  def run(["known_network", nickname]) do
@ -122,6 +128,6 @@ defmodule Mix.Tasks.Pleroma.Diagnostics do
    |> limit(20)

    Ecto.Adapters.SQL.explain(Repo, :all, query, analyze: true, timeout: :infinity)
    |> shell_info()
  end
end
@ -27,11 +27,11 @@ defmodule Mix.Tasks.Pleroma.Emoji do
      ]

      for {param, value} <- to_print do
        shell_info(IO.ANSI.format([:bright, param, :normal, ": ", value]))
      end

      # A newline
      shell_info("")
    end)
  end
@ -49,7 +49,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
      pack = manifest[pack_name]
      src = pack["src"]

      shell_info(
        IO.ANSI.format([
          "Downloading ",
          :bright,
@ -67,9 +67,9 @@ defmodule Mix.Tasks.Pleroma.Emoji do
      sha_status_text = ["SHA256 of ", :bright, pack_name, :normal, " source file is ", :bright]

      if archive_sha == String.upcase(pack["src_sha256"]) do
        shell_info(IO.ANSI.format(sha_status_text ++ [:green, "OK"]))
      else
        shell_info(IO.ANSI.format(sha_status_text ++ [:red, "BAD"]))
        raise "Bad SHA256 for #{pack_name}"
      end
@ -80,7 +80,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
        |> Path.dirname()
        |> Path.join(pack["files"])

      shell_info(
        IO.ANSI.format([
          "Fetching the file list for ",
          :bright,
@ -94,7 +94,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
      files = fetch_and_decode!(files_loc)

      shell_info(IO.ANSI.format(["Unpacking ", :bright, pack_name]))

      pack_path =
        Path.join([
@ -115,7 +115,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
          file_list: files_to_unzip
        )

      shell_info(IO.ANSI.format(["Writing pack.json for ", :bright, pack_name]))

      pack_json = %{
        pack: %{
@ -132,7 +132,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
        File.write!(Path.join(pack_path, "pack.json"), Jason.encode!(pack_json, pretty: true))

        Pleroma.Emoji.reload()
      else
        shell_info(IO.ANSI.format([:bright, :red, "No pack named \"#{pack_name}\" found"]))
      end
    end
  end
@ -180,14 +180,14 @@ defmodule Mix.Tasks.Pleroma.Emoji do
        custom_exts
      end

    shell_info("Using #{Enum.join(exts, " ")} extensions")

    shell_info("Downloading the pack and generating SHA256")

    {:ok, %{body: binary_archive}} = Pleroma.HTTP.get(src)
    archive_sha = :crypto.hash(:sha256, binary_archive) |> Base.encode16()

    shell_info("SHA256 is #{archive_sha}")

    pack_json = %{
      name => %{
@ -208,7 +208,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
    File.write!(files_name, Jason.encode!(emoji_map, pretty: true))

    shell_info("""
    #{files_name} has been created and contains the list of all found emojis in the pack.
    Please review the files in the pack and remove those not needed.
@ -230,11 +230,11 @@ defmodule Mix.Tasks.Pleroma.Emoji do
        )
      )

      shell_info("#{pack_file} has been updated with the #{name} pack")
    else
      File.write!(pack_file, Jason.encode!(pack_json, pretty: true))

      shell_info("#{pack_file} has been created with the #{name} pack")
    end

    Pleroma.Emoji.reload()
@ -243,7 +243,7 @@ defmodule Mix.Tasks.Pleroma.Emoji do
def run(["reload"]) do def run(["reload"]) do
start_pleroma() start_pleroma()
Pleroma.Emoji.reload() Pleroma.Emoji.reload()
IO.puts("Emoji packs have been reloaded.") shell_info("Emoji packs have been reloaded.")
end end
defp fetch_and_decode!(from) do defp fetch_and_decode!(from) do
@ -20,6 +20,7 @@ defmodule Mix.Tasks.Pleroma.Instance do
        output: :string,
        output_psql: :string,
        domain: :string,
        media_url: :string,
        instance_name: :string,
        admin_email: :string,
        notify_email: :string,
@ -34,9 +35,9 @@ defmodule Mix.Tasks.Pleroma.Instance do
        static_dir: :string,
        listen_ip: :string,
        listen_port: :string,
        strip_uploads_metadata: :string,
        read_uploads_description: :string,
        anonymize_uploads: :string
      ],
      aliases: [
        o: :output,
@ -64,6 +65,14 @@ defmodule Mix.Tasks.Pleroma.Instance do
":" ":"
) ++ [443] ) ++ [443]
media_url =
get_option(
options,
:media_url,
"What base url will uploads use? (e.g https://media.example.com/media)\n" <>
" Generally this should NOT use the same domain as the instance "
)
name = name =
get_option( get_option(
options, options,
@ -161,21 +170,38 @@ defmodule Mix.Tasks.Pleroma.Instance do
      )
      |> Path.expand()

    {strip_uploads_metadata_message, strip_uploads_metadata_default} =
      if Pleroma.Utils.command_available?("exiftool") do
        {"Do you want to strip metadata from uploaded images? This requires exiftool, it was detected as installed. (y/n)",
         "y"}
      else
        {"Do you want to strip metadata from uploaded images? This requires exiftool, it was detected as not installed, please install it if you answer yes. (y/n)",
         "n"}
      end

    strip_uploads_metadata =
      get_option(
        options,
        :strip_uploads_metadata,
        strip_uploads_metadata_message,
        strip_uploads_metadata_default
      ) === "y"

    {read_uploads_description_message, read_uploads_description_default} =
      if Pleroma.Utils.command_available?("exiftool") do
        {"Do you want to read data from uploaded files so clients can use it to prefill fields like image description? This requires exiftool, it was detected as installed. (y/n)",
         "y"}
      else
        {"Do you want to read data from uploaded files so clients can use it to prefill fields like image description? This requires exiftool, it was detected as not installed, please install it if you answer yes. (y/n)",
         "n"}
      end

    read_uploads_description =
      get_option(
        options,
        :read_uploads_description,
        read_uploads_description_message,
        read_uploads_description_default
      ) === "y"

    anonymize_uploads =
@ -186,14 +212,6 @@ defmodule Mix.Tasks.Pleroma.Instance do
"n" "n"
) === "y" ) === "y"
dedupe_uploads =
get_option(
options,
:dedupe_uploads,
"Do you want to deduplicate uploaded files? (y/n)",
"n"
) === "y"
Config.put([:instance, :static_dir], static_dir) Config.put([:instance, :static_dir], static_dir)
secret = :crypto.strong_rand_bytes(64) |> Base.encode64() |> binary_part(0, 64) secret = :crypto.strong_rand_bytes(64) |> Base.encode64() |> binary_part(0, 64)
@ -207,6 +225,7 @@ defmodule Mix.Tasks.Pleroma.Instance do
      EEx.eval_file(
        template_dir <> "/sample_config.eex",
        domain: domain,
        media_url: media_url,
        port: port,
        email: email,
        notify_email: notify_email,
@ -229,9 +248,9 @@ defmodule Mix.Tasks.Pleroma.Instance do
        listen_port: listen_port,
        upload_filters:
          upload_filters(%{
            strip_metadata: strip_uploads_metadata,
            read_description: read_uploads_description,
            anonymize: anonymize_uploads
          })
      )
@ -305,11 +324,20 @@ defmodule Mix.Tasks.Pleroma.Instance do
  end

  defp upload_filters(filters) when is_map(filters) do
    enabled_filters = []

    enabled_filters =
      if filters.read_description do
        enabled_filters ++ [Pleroma.Upload.Filter.Exiftool.ReadDescription]
      else
        enabled_filters
      end

    enabled_filters =
      if filters.strip_metadata do
        enabled_filters ++ [Pleroma.Upload.Filter.Exiftool.StripMetadata]
      else
        enabled_filters
      end

    enabled_filters =
@ -319,13 +347,6 @@ defmodule Mix.Tasks.Pleroma.Instance do
enabled_filters enabled_filters
end end
enabled_filters =
if filters.dedupe do
enabled_filters ++ [Pleroma.Upload.Filter.Dedupe]
else
enabled_filters
end
enabled_filters enabled_filters
end end
end end
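
For illustration, a sketch of what the reworked helper yields when only the two exiftool-backed prompts are accepted; the anonymize branch (elided in this hunk) is assumed unchanged and declined here:

```
upload_filters(%{
  strip_metadata: true,
  read_description: true,
  anonymize: false
})
# => [Pleroma.Upload.Filter.Exiftool.ReadDescription,
#     Pleroma.Upload.Filter.Exiftool.StripMetadata]
```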


@ -11,7 +11,6 @@ defmodule Mix.Tasks.Pleroma.RefreshCounterCache do
alias Pleroma.CounterCache alias Pleroma.CounterCache
alias Pleroma.Repo alias Pleroma.Repo
require Logger
import Ecto.Query import Ecto.Query
def run([]) do def run([]) do


@ -48,7 +48,7 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
] ]
) )
IO.puts("Created indices. Starting to insert posts.") shell_info("Created indices. Starting to insert posts.")
chunk_size = Pleroma.Config.get([Pleroma.Search.Meilisearch, :initial_indexing_chunk_size]) chunk_size = Pleroma.Config.get([Pleroma.Search.Meilisearch, :initial_indexing_chunk_size])
@ -65,7 +65,7 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
) )
count = query |> Pleroma.Repo.aggregate(:count, :data) count = query |> Pleroma.Repo.aggregate(:count, :data)
IO.puts("Entries to index: #{count}") shell_info("Entries to index: #{count}")
Pleroma.Repo.stream( Pleroma.Repo.stream(
query, query,
@ -92,10 +92,10 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
with {:ok, res} <- result do with {:ok, res} <- result do
if not Map.has_key?(res, "indexUid") do if not Map.has_key?(res, "indexUid") do
IO.puts("\nFailed to index: #{inspect(result)}") shell_info("\nFailed to index: #{inspect(result)}")
end end
else else
e -> IO.puts("\nFailed to index due to network error: #{inspect(e)}") e -> shell_error("\nFailed to index due to network error: #{inspect(e)}")
end end
end) end)
|> Stream.run() |> Stream.run()
@ -126,11 +126,15 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
decoded = Jason.decode!(result.body) decoded = Jason.decode!(result.body)
if decoded["results"] do if decoded["results"] do
Enum.each(decoded["results"], fn %{"description" => desc, "key" => key} -> Enum.each(decoded["results"], fn
IO.puts("#{desc}: #{key}") %{"name" => name, "key" => key} ->
shell_info("#{name}: #{key}")
%{"description" => desc, "key" => key} ->
shell_info("#{desc}: #{key}")
end) end)
else else
IO.puts("Error fetching the keys, check the master key is correct: #{inspect(decoded)}") shell_error("Error fetching the keys, check the master key is correct: #{inspect(decoded)}")
end end
end end
@ -138,7 +142,7 @@ defmodule Mix.Tasks.Pleroma.Search.Meilisearch do
start_pleroma() start_pleroma()
{:ok, result} = meili_get("/indexes/objects/stats") {:ok, result} = meili_get("/indexes/objects/stats")
IO.puts("Number of entries: #{result["numberOfDocuments"]}") shell_info("Number of entries: #{result["numberOfDocuments"]}")
IO.puts("Indexing? #{result["isIndexing"]}") shell_info("Indexing? #{result["isIndexing"]}")
end end
end end


@ -0,0 +1,330 @@
# Akkoma: Magically expressive social media
# Copyright © 2024 Akkoma Authors <https://akkoma.dev/>
# SPDX-License-Identifier: AGPL-3.0-only
defmodule Mix.Tasks.Pleroma.Security do
use Mix.Task
import Ecto.Query
import Mix.Pleroma
alias Pleroma.Config
require Logger
@shortdoc """
Security-related tasks, e.g. checking for signs that past exploits were abused.
"""
# Constants etc
defp local_id_prefix(), do: Pleroma.Web.Endpoint.url() <> "/"
defp local_id_pattern(), do: local_id_prefix() <> "%"
@activity_exts ["activity+json", "activity%2Bjson"]
defp activity_ext_url_patterns() do
for e <- @activity_exts do
for suf <- ["", "?%"] do
# Escape literal % for use in SQL patterns
ee = String.replace(e, "%", "\\%")
"%.#{ee}#{suf}"
end
end
|> List.flatten()
end
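
Spelled out, the helper yields four SQL `LIKE` patterns, covering the plain and the percent-encoded extension, each with and without a trailing query string (the literal `%` in the encoded form is escaped for SQL):

```
activity_ext_url_patterns()
# => ["%.activity+json", "%.activity+json?%",
#     "%.activity\\%2Bjson", "%.activity\\%2Bjson?%"]
```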
# Search for malicious uploads exploiting the lack of Content-Type sanitisation from before 2024-03
def run(["spoof-uploaded"]) do
Logger.put_process_level(self(), :notice)
start_pleroma()
shell_info("""
+------------------------+
| SPOOF SEARCH UPLOADS |
+------------------------+
Checking if any uploads are using privileged types.
NOTE if attachment deletion is enabled, payloads used
in the past may no longer exist.
""")
do_spoof_uploaded()
end
# Fuzzy search for potentially counterfeit activities in the database resulting from the same exploit
def run(["spoof-inserted"]) do
Logger.put_process_level(self(), :notice)
start_pleroma()
shell_info("""
+----------------------+
| SPOOF SEARCH NOTES |
+----------------------+
Starting fuzzy search for counterfeit activities.
NOTE this cannot guarantee detecting all counterfeits
and may yield a small percentage of false positives.
""")
do_spoof_inserted()
end
# +-----------------------------+
# | S P O O F - U P L O A D E D |
# +-----------------------------+
defp do_spoof_uploaded() do
files =
case Config.get!([Pleroma.Upload, :uploader]) do
Pleroma.Uploaders.Local ->
uploads_search_spoofs_local_dir(Config.get!([Pleroma.Uploaders.Local, :uploads]))
_ ->
shell_info("""
NOTE:
Not using the local uploader; thus not affected by this exploit.
It's impossible to check for files, but in case the local uploader was used before,
or to check whether anyone futilely attempted a spoof, notes will still be scanned.
""")
[]
end
emoji = uploads_search_spoofs_local_dir(Config.get!([:instance, :static_dir]))
post_attachs = uploads_search_spoofs_notes()
not_orphaned_urls =
post_attachs
|> Enum.map(fn {_u, _a, url} -> url end)
|> MapSet.new()
orphaned_attachs = upload_search_orphaned_attachments(not_orphaned_urls)
shell_info("\nSearch concluded; here are the results:")
pretty_print_list_with_title(emoji, "Emoji")
pretty_print_list_with_title(files, "Uploaded Files")
pretty_print_list_with_title(post_attachs, "(Not Deleted) Post Attachments")
pretty_print_list_with_title(orphaned_attachs, "Orphaned Uploads")
shell_info("""
In total found
#{length(emoji)} emoji
#{length(files)} uploads
#{length(post_attachs)} not deleted posts
#{length(orphaned_attachs)} orphaned attachments
""")
end
defp uploads_search_spoofs_local_dir(dir) do
local_dir = String.replace_suffix(dir, "/", "")
shell_info("Searching for suspicious files in #{local_dir}...")
glob_ext = "{" <> Enum.join(@activity_exts, ",") <> "}"
Path.wildcard(local_dir <> "/**/*." <> glob_ext, match_dot: true)
|> Enum.map(fn path ->
String.replace_prefix(path, local_dir <> "/", "")
end)
|> Enum.sort()
end
defp uploads_search_spoofs_notes() do
shell_info("Now querying DB for posts with spoofing attachments. This might take a while...")
patterns = [local_id_pattern() | activity_ext_url_patterns()]
# unclear whether jsonb_array_elements in FROM can be used with normal Ecto functions
"""
SELECT DISTINCT a.data->>'actor', a.id, url->>'href'
FROM public.objects AS o JOIN public.activities AS a
ON o.data->>'id' = a.data->>'object',
jsonb_array_elements(o.data->'attachment') AS attachs,
jsonb_array_elements(attachs->'url') AS url
WHERE o.data->>'type' = 'Note' AND
o.data->>'id' LIKE $1::text AND (
url->>'href' LIKE $2::text OR
url->>'href' LIKE $3::text OR
url->>'href' LIKE $4::text OR
url->>'href' LIKE $5::text
)
ORDER BY a.data->>'actor', a.id, url->>'href';
"""
|> Pleroma.Repo.query!(patterns, timeout: :infinity)
|> map_raw_id_apid_tuple()
end
defp upload_search_orphaned_attachments(not_orphaned_urls) do
shell_info("""
Now querying DB for orphaned spoofing attachments (i.e. their post was deleted,
but, if :cleanup_attachments was not enabled, traces remain in the database).
This might take a bit...
""")
patterns = activity_ext_url_patterns()
"""
SELECT DISTINCT attach.id, url->>'href'
FROM public.objects AS attach,
jsonb_array_elements(attach.data->'url') AS url
WHERE (attach.data->>'type' = 'Image' OR
attach.data->>'type' = 'Document')
AND (
url->>'href' LIKE $1::text OR
url->>'href' LIKE $2::text OR
url->>'href' LIKE $3::text OR
url->>'href' LIKE $4::text
)
ORDER BY attach.id, url->>'href';
"""
|> Pleroma.Repo.query!(patterns, timeout: :infinity)
|> then(fn res -> Enum.map(res.rows, fn [id, url] -> {id, url} end) end)
|> Enum.filter(fn {_, url} -> !(url in not_orphaned_urls) end)
end
# +-----------------------------+
# | S P O O F - I N S E R T E D |
# +-----------------------------+
defp do_spoof_inserted() do
shell_info("""
Searching for local posts whose Create activity has no ActivityPub id...
This is a pretty good indicator, but only for spoofs of local actors
and only if the spoofing happened after around late 2021.
""")
idless_create =
search_local_notes_without_create_id()
|> Enum.sort()
shell_info("Done.\n")
shell_info("""
Now trying to weed out other poorly hidden spoofs.
This can't detect all and may have some false positives.
""")
likely_spoofed_posts_set = MapSet.new(idless_create)
sus_pattern_posts =
search_sus_notes_by_id_patterns()
|> Enum.filter(fn r -> !(r in likely_spoofed_posts_set) end)
shell_info("Done.\n")
shell_info("""
Finally, searching for spoofed, local user accounts.
(It's impossible to detect spoofed remote users)
""")
spoofed_users = search_bogus_local_users()
pretty_print_list_with_title(sus_pattern_posts, "Maybe Spoofed Posts")
pretty_print_list_with_title(idless_create, "Likely Spoofed Posts")
pretty_print_list_with_title(spoofed_users, "Spoofed local user accounts")
shell_info("""
In total found:
#{length(spoofed_users)} bogus users
#{length(idless_create)} likely spoofed posts
#{length(sus_pattern_posts)} maybe spoofed posts
""")
end
defp search_local_notes_without_create_id() do
Pleroma.Object
|> where([o], fragment("?->>'id' LIKE ?", o.data, ^local_id_pattern()))
|> join(:inner, [o], a in Pleroma.Activity,
on: fragment("?->>'object' = ?->>'id'", a.data, o.data)
)
|> where([o, a], fragment("NOT (? \\? 'id') OR ?->>'id' IS NULL", a.data, a.data))
|> select([o, a], {a.id, fragment("?->>'id'", o.data)})
|> order_by([o, a], a.id)
|> Pleroma.Repo.all(timeout: :infinity)
end
defp search_sus_notes_by_id_patterns() do
[ep1, ep2, ep3, ep4] = activity_ext_url_patterns()
Pleroma.Object
|> where(
[o],
# for local objects we know exactly what a genuine id looks like
# (though a thorough attacker can emulate this)
# for remote posts, use some best-effort patterns
fragment(
"""
(?->>'id' LIKE ? AND ?->>'id' NOT SIMILAR TO
? || 'objects/[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}')
""",
o.data,
^local_id_pattern(),
o.data,
^local_id_prefix()
) or
fragment("?->>'id' LIKE ?", o.data, "%/emoji/%") or
fragment("?->>'id' LIKE ?", o.data, "%/media/%") or
fragment("?->>'id' LIKE ?", o.data, "%/proxy/%") or
fragment("?->>'id' LIKE ?", o.data, ^ep1) or
fragment("?->>'id' LIKE ?", o.data, ^ep2) or
fragment("?->>'id' LIKE ?", o.data, ^ep3) or
fragment("?->>'id' LIKE ?", o.data, ^ep4)
)
|> join(:inner, [o], a in Pleroma.Activity,
on: fragment("?->>'object' = ?->>'id'", a.data, o.data)
)
|> select([o, a], {a.id, fragment("?->>'id'", o.data)})
|> order_by([o, a], a.id)
|> Pleroma.Repo.all(timeout: :infinity)
end
defp search_bogus_local_users() do
Pleroma.User.Query.build(%{})
|> where([u], u.local == false and like(u.ap_id, ^local_id_pattern()))
|> order_by([u], u.ap_id)
|> select([u], u.ap_id)
|> Pleroma.Repo.all(timeout: :infinity)
end
# +-----------------------------------+
# | module-specific utility functions |
# +-----------------------------------+
defp pretty_print_list_with_title(list, title) do
title_len = String.length(title)
title_underline = String.duplicate("=", title_len)
shell_info(title)
shell_info(title_underline)
pretty_print_list(list)
end
defp pretty_print_list([]), do: shell_info("")
defp pretty_print_list([{a, o} | rest])
when (is_binary(a) or is_number(a)) and is_binary(o) do
shell_info(" {#{a}, #{o}}")
pretty_print_list(rest)
end
defp pretty_print_list([{u, a, o} | rest])
when is_binary(a) and is_binary(u) and is_binary(o) do
shell_info(" {#{u}, #{a}, #{o}}")
pretty_print_list(rest)
end
defp pretty_print_list([e | rest]) when is_binary(e) do
shell_info(" #{e}")
pretty_print_list(rest)
end
defp pretty_print_list([e | rest]), do: pretty_print_list([inspect(e) | rest])
defp map_raw_id_apid_tuple(res) do
user_prefix = local_id_prefix() <> "users/"
Enum.map(res.rows, fn
[uid, aid, oid] ->
{
String.replace_prefix(uid, user_prefix, ""),
FlakeId.to_string(aid),
oid
}
end)
end
end
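
Both scans are plain mix tasks, so source installs can run them as `mix pleroma.security spoof-uploaded` and `mix pleroma.security spoof-inserted`; OTP-release installs would invoke them through their usual task wrapper.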


@ -114,7 +114,7 @@ defmodule Mix.Tasks.Pleroma.User do
{:ok, token} <- Pleroma.PasswordResetToken.create_token(user) do {:ok, token} <- Pleroma.PasswordResetToken.create_token(user) do
shell_info("Generated password reset token for #{user.nickname}") shell_info("Generated password reset token for #{user.nickname}")
IO.puts("URL: #{~p[/api/v1/pleroma/password_reset/#{token.token}]}") shell_info("URL: #{~p[/api/v1/pleroma/password_reset/#{token.token}]}")
else else
_ -> _ ->
shell_error("No local user #{nickname}") shell_error("No local user #{nickname}")
@ -301,7 +301,7 @@ defmodule Mix.Tasks.Pleroma.User do
shell_info("Generated user invite token " <> String.replace(invite.invite_type, "_", " ")) shell_info("Generated user invite token " <> String.replace(invite.invite_type, "_", " "))
url = url(~p[/registration/#{invite.token}]) url = url(~p[/registration/#{invite.token}])
IO.puts(url) shell_info(url)
else else
error -> error ->
shell_error("Could not create invite token: #{inspect(error)}") shell_error("Could not create invite token: #{inspect(error)}")
@ -373,7 +373,7 @@ defmodule Mix.Tasks.Pleroma.User do
nickname nickname
|> User.get_cached_by_nickname() |> User.get_cached_by_nickname()
shell_info("#{inspect(user)}") shell_info(user)
end end
def run(["send_confirmation", nickname]) do def run(["send_confirmation", nickname]) do
@ -457,7 +457,7 @@ defmodule Mix.Tasks.Pleroma.User do
with %User{local: true} = user <- User.get_cached_by_nickname(nickname) do with %User{local: true} = user <- User.get_cached_by_nickname(nickname) do
blocks = User.following_ap_ids(user) blocks = User.following_ap_ids(user)
IO.puts("#{inspect(blocks)}") shell_info(blocks)
end end
end end
@ -516,12 +516,12 @@ defmodule Mix.Tasks.Pleroma.User do
{:follow_data, Pleroma.Web.ActivityPub.Utils.fetch_latest_follow(local, remote)} do {:follow_data, Pleroma.Web.ActivityPub.Utils.fetch_latest_follow(local, remote)} do
calculated_state = User.following?(local, remote) calculated_state = User.following?(local, remote)
IO.puts( shell_info(
"Request state is #{request_state}, vs calculated state of following=#{calculated_state}" "Request state is #{request_state}, vs calculated state of following=#{calculated_state}"
) )
if calculated_state == false && request_state == "accept" do if calculated_state == false && request_state == "accept" do
IO.puts("Discrepancy found, fixing") shell_info("Discrepancy found, fixing")
Pleroma.Web.CommonAPI.reject_follow_request(local, remote) Pleroma.Web.CommonAPI.reject_follow_request(local, remote)
shell_info("Relationship fixed") shell_info("Relationship fixed")
else else
@ -551,14 +551,14 @@ defmodule Mix.Tasks.Pleroma.User do
|> Stream.each(fn users -> |> Stream.each(fn users ->
users users
|> Enum.each(fn user -> |> Enum.each(fn user ->
IO.puts("Re-Resolving: #{user.ap_id}") shell_info("Re-Resolving: #{user.ap_id}")
with {:ok, user} <- Pleroma.User.fetch_by_ap_id(user.ap_id), with {:ok, user} <- Pleroma.User.fetch_by_ap_id(user.ap_id),
changeset <- Pleroma.User.update_changeset(user), changeset <- Pleroma.User.update_changeset(user),
{:ok, _user} <- Pleroma.User.update_and_set_cache(changeset) do {:ok, _user} <- Pleroma.User.update_and_set_cache(changeset) do
:ok :ok
else else
error -> IO.puts("Could not resolve: #{user.ap_id}, #{inspect(error)}") error -> shell_info("Could not resolve: #{user.ap_id}, #{inspect(error)}")
end end
end) end)
end) end)


@ -258,6 +258,27 @@ defmodule Pleroma.Activity do
def get_create_by_object_ap_id(_), do: nil def get_create_by_object_ap_id(_), do: nil
@doc """
Accepts a list of `ap_id`s.
Returns a query yielding Create activities for the given objects,
in the same order as they were specified in the input list.
"""
@spec get_presorted_create_by_object_ap_id([String.t()]) :: Ecto.Queryable.t()
def get_presorted_create_by_object_ap_id(ap_ids) do
from(
a in Activity,
join:
ids in fragment(
"SELECT * FROM UNNEST(?::text[]) WITH ORDINALITY AS ids(ap_id, ord)",
^ap_ids
),
on:
ids.ap_id == fragment("?->>'object'", a.data) and
fragment("?->>'type'", a.data) == "Create",
order_by: [asc: ids.ord]
)
end
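
A minimal usage sketch (the ids are hypothetical):

```
ap_ids = [
  "https://example.com/objects/2",
  "https://example.com/objects/1"
]

ap_ids
|> Pleroma.Activity.get_presorted_create_by_object_ap_id()
|> Pleroma.Repo.all()
# => Create activities ordered exactly like ap_ids
```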
@doc """ @doc """
Accepts `ap_id` or list of `ap_id`. Accepts `ap_id` or list of `ap_id`.
Returns a query. Returns a query.


@ -28,7 +28,7 @@ defmodule Pleroma.Activity.HTML do
end end
end end
defp add_cache_key_for(activity_id, additional_key) do def add_cache_key_for(activity_id, additional_key) do
current = get_cache_keys_for(activity_id) current = get_cache_keys_for(activity_id)
unless additional_key in current do unless additional_key in current do


@ -26,6 +26,15 @@ defmodule Pleroma.Activity.Pruner do
|> Repo.delete_all(timeout: :infinity) |> Repo.delete_all(timeout: :infinity)
end end
def prune_updates do
before_time = cutoff()
from(a in Activity,
where: fragment("?->>'type' = ?", a.data, "Update") and a.inserted_at < ^before_time
)
|> Repo.delete_all(timeout: :infinity)
end
def prune_removes do def prune_removes do
before_time = cutoff() before_time = cutoff()


@ -95,34 +95,17 @@ defmodule Pleroma.Application do
opts = [strategy: :one_for_one, name: Pleroma.Supervisor, max_restarts: max_restarts] opts = [strategy: :one_for_one, name: Pleroma.Supervisor, max_restarts: max_restarts]
with {:ok, data} <- Supervisor.start_link(children, opts) do case Supervisor.start_link(children, opts) do
set_postgres_server_version() {:ok, data} ->
{:ok, data} {:ok, data}
else
e -> e ->
Logger.error("Failed to start!") Logger.critical("Failed to start!")
Logger.error("#{inspect(e)}") Logger.critical("#{inspect(e)}")
e e
end end
end end
defp set_postgres_server_version do
version =
with %{rows: [[version]]} <- Ecto.Adapters.SQL.query!(Pleroma.Repo, "show server_version"),
{num, _} <- Float.parse(version) do
num
else
e ->
Logger.warning(
"Could not get the postgres version: #{inspect(e)}.\nSetting the default value of 9.6"
)
9.6
end
:persistent_term.put({Pleroma.Repo, :postgres_version}, version)
end
def load_custom_modules do def load_custom_modules do
dir = Config.get([:modules, :runtime_dir]) dir = Config.get([:modules, :runtime_dir])
@ -179,7 +162,9 @@ defmodule Pleroma.Application do
build_cachex("translations", default_ttl: :timer.hours(24 * 30), limit: 2500), build_cachex("translations", default_ttl: :timer.hours(24 * 30), limit: 2500),
build_cachex("instances", default_ttl: :timer.hours(24), ttl_interval: 1000, limit: 2500), build_cachex("instances", default_ttl: :timer.hours(24), ttl_interval: 1000, limit: 2500),
build_cachex("request_signatures", default_ttl: :timer.hours(24 * 30), limit: 3000), build_cachex("request_signatures", default_ttl: :timer.hours(24 * 30), limit: 3000),
build_cachex("rel_me", default_ttl: :timer.hours(24 * 30), limit: 300) build_cachex("rel_me", default_ttl: :timer.hours(24 * 30), limit: 300),
build_cachex("host_meta", default_ttl: :timer.minutes(120), limit: 5000),
build_cachex("http_backoff", default_ttl: :timer.hours(24 * 30), limit: 10000)
] ]
end end
@ -279,7 +264,9 @@ defmodule Pleroma.Application do
defp http_children do defp http_children do
proxy_url = Config.get([:http, :proxy_url]) proxy_url = Config.get([:http, :proxy_url])
proxy = Pleroma.HTTP.AdapterHelper.format_proxy(proxy_url) proxy = Pleroma.HTTP.AdapterHelper.format_proxy(proxy_url)
pool_size = Config.get([:http, :pool_size]) pool_size = Config.get([:http, :pool_size], 10)
pool_timeout = Config.get([:http, :pool_timeout], 60_000)
connection_timeout = Config.get([:http, :conn_max_idle_time], 10_000)
:public_key.cacerts_load() :public_key.cacerts_load()
@ -288,6 +275,9 @@ defmodule Pleroma.Application do
|> Config.get([]) |> Config.get([])
|> Pleroma.HTTP.AdapterHelper.add_pool_size(pool_size) |> Pleroma.HTTP.AdapterHelper.add_pool_size(pool_size)
|> Pleroma.HTTP.AdapterHelper.maybe_add_proxy_pool(proxy) |> Pleroma.HTTP.AdapterHelper.maybe_add_proxy_pool(proxy)
|> Pleroma.HTTP.AdapterHelper.ensure_ipv6()
|> Pleroma.HTTP.AdapterHelper.add_default_conn_max_idle_time(connection_timeout)
|> Pleroma.HTTP.AdapterHelper.add_default_pool_max_idle_time(pool_timeout)
|> Keyword.put(:name, MyFinch) |> Keyword.put(:name, MyFinch)
[{Finch, config}] [{Finch, config}]


@ -164,7 +164,8 @@ defmodule Pleroma.ApplicationRequirements do
defp check_system_commands!(:ok) do defp check_system_commands!(:ok) do
filter_commands_statuses = [ filter_commands_statuses = [
check_filter(Pleroma.Upload.Filter.Exiftool, "exiftool"), check_filter(Pleroma.Upload.Filter.Exiftool.StripMetadata, "exiftool"),
check_filter(Pleroma.Upload.Filter.Exiftool.ReadDescription, "exiftool"),
check_filter(Pleroma.Upload.Filter.Mogrify, "mogrify"), check_filter(Pleroma.Upload.Filter.Mogrify, "mogrify"),
check_filter(Pleroma.Upload.Filter.Mogrifun, "mogrify"), check_filter(Pleroma.Upload.Filter.Mogrifun, "mogrify"),
check_filter(Pleroma.Upload.Filter.AnalyzeMetadata, "mogrify"), check_filter(Pleroma.Upload.Filter.AnalyzeMetadata, "mogrify"),


@ -68,7 +68,10 @@ defmodule Akkoma.Collections.Fetcher do
items items
end end
else else
{:error, {"Object has been deleted", _, _}} -> {:error, :not_found} ->
items
{:error, :forbidden} ->
items items
{:error, error} -> {:error, error} ->


@ -1,112 +0,0 @@
defmodule Pleroma.Config.ConfigurableFromDatabase do
alias Pleroma.Config
# Basically it's silly to let this be configurable
# set a list of things that we can set in the database
# this is mostly our stuff, with some extra in there
@allowed_groups [
{:logger},
{:pleroma, Pleroma.Captcha},
{:pleroma, Pleroma.Captcha.Kocaptcha},
{:pleroma, Pleroma.Upload},
{:pleroma, Pleroma.Uploaders.Local},
{:pleroma, Pleroma.Uploaders.S3},
{:pleroma, :auth},
{:pleroma, :emoji},
{:pleroma, :http},
{:pleroma, :instance},
{:pleroma, :welcome},
{:pleroma, :feed},
{:pleroma, :markup},
{:pleroma, :frontend_configurations},
{:pleroma, :assets},
{:pleroma, :manifest},
{:pleroma, :activitypub},
{:pleroma, :streamer},
{:pleroma, :user},
{:pleroma, :mrf_normalize_markup},
{:pleroma, :mrf_rejectnonpublic},
{:pleroma, :mrf_hellthread},
{:pleroma, :mrf_simple},
{:pleroma, :mrf_keyword},
{:pleroma, :mrf_hashtag},
{:pleroma, :mrf_subchain},
{:pleroma, :mrf_activity_expiration},
{:pleroma, :mrf_vocabulary},
{:pleroma, :mrf_inline_quote},
{:pleroma, :mrf_object_age},
{:pleroma, :mrf_follow_bot},
{:pleroma, :mrf_reject_newly_created_account_notes},
{:pleroma, :rich_media},
{:pleroma, :media_proxy},
{:pleroma, Pleroma.Web.MediaProxy.Invalidation.Http},
{:pleroma, :media_preview_proxy},
{:pleroma, Pleroma.Web.Metadata},
{:pleroma, Pleroma.Web.Metadata.Providers.Theme},
{:pleroma, Pleroma.Web.Preload},
{:pleroma, :http_security},
{:pleroma, Pleroma.User},
{:pleroma, Oban},
{:pleroma, :workers},
{:pleroma, Pleroma.Formatter},
{:pleroma, Pleroma.Emails.Mailer},
{:pleroma, Pleroma.Emails.UserEmail},
{:pleroma, Pleroma.Emails.NewUsersDigestEmail},
{:pleroma, Pleroma.ScheduledActivity},
{:pleroma, :email_notifications},
{:pleroma, :oauth2},
{:pleroma, Pleroma.Web.Plugs.RemoteIp},
{:pleroma, :static_fe},
{:pleroma, :frontends},
{:pleroma, :web_cache_ttl},
{:pleroma, :majic_pool},
{:pleroma, :restrict_unauthenticated},
{:pleroma, Pleroma.Web.ApiSpec.CastAndValidate},
{:pleroma, :mrf},
{:pleroma, :instances_favicons},
{:pleroma, :instances_nodeinfo},
{:pleroma, Pleroma.User.Backup},
{:pleroma, ConcurrentLimiter},
{:pleroma, Pleroma.Web.WebFinger},
{:pleroma, Pleroma.Search},
{:pleroma, Pleroma.Search.Meilisearch},
{:pleroma, Pleroma.Search.Elasticsearch.Cluster},
{:pleroma, :translator},
{:pleroma, :deepl},
{:pleroma, :libre_translate},
# But not argostranslate, because executables!
{:pleroma, Pleroma.Upload.Filter.AnonymizeFilename},
{:pleroma, Pleroma.Upload.Filter.Mogrify},
{:pleroma, Pleroma.Workers.PurgeExpiredActivity},
{:pleroma, :rate_limit}
]
def allowed_groups, do: @allowed_groups
def enabled, do: Config.get(:configurable_from_database)
# the whitelist check can be called from either the loader or the
# doc generator, which is spitting out strings
defp maybe_stringified_atom_equal(a, b) do
a == inspect(b) || a == b
end
def whitelisted_config?(group, key) do
allowed_groups()
|> Enum.any?(fn
{whitelisted_group} ->
maybe_stringified_atom_equal(group, whitelisted_group)
{whitelisted_group, whitelisted_key} ->
maybe_stringified_atom_equal(group, whitelisted_group) && maybe_stringified_atom_equal(key, whitelisted_key)
end)
end
def whitelisted_config?(%{group: group, key: key}) do
whitelisted_config?(group, key)
end
def whitelisted_config?(%{group: group} = config) do
whitelisted_config?(group, config[:key])
end
end


@ -22,6 +22,43 @@ defmodule Pleroma.Config.DeprecationWarnings do
"\n* `config :pleroma, :instance, :quarantined_instances` is now covered by `:pleroma, :mrf_simple, :reject`"} "\n* `config :pleroma, :instance, :quarantined_instances` is now covered by `:pleroma, :mrf_simple, :reject`"}
] ]
def check_exiftool_filter do
filters = Config.get([Pleroma.Upload]) |> Keyword.get(:filters, [])
if Pleroma.Upload.Filter.Exiftool in filters do
Logger.warning("""
!!!DEPRECATION WARNING!!!
Your config is using Exiftool as a filter instead of Exiftool.StripMetadata. This should work for now, but you are advised to change to the new configuration to prevent possible issues later:
```
config :pleroma, Pleroma.Upload,
filters: [Pleroma.Upload.Filter.Exiftool]
```
Is now
```
config :pleroma, Pleroma.Upload,
filters: [Pleroma.Upload.Filter.Exiftool.StripMetadata]
```
""")
new_config =
filters
|> Enum.map(fn
Pleroma.Upload.Filter.Exiftool -> Pleroma.Upload.Filter.Exiftool.StripMetadata
filter -> filter
end)
Config.put([Pleroma.Upload, :filters], new_config)
:error
else
:ok
end
end
def check_simple_policy_tuples do def check_simple_policy_tuples do
has_strings = has_strings =
Config.get([:mrf_simple]) Config.get([:mrf_simple])
@ -182,7 +219,10 @@ defmodule Pleroma.Config.DeprecationWarnings do
check_quarantined_instances_tuples(), check_quarantined_instances_tuples(),
check_transparency_exclusions_tuples(), check_transparency_exclusions_tuples(),
check_simple_policy_tuples(), check_simple_policy_tuples(),
check_http_adapter() check_http_adapter(),
check_uploader_base_url_set(),
check_uploader_base_url_is_not_base_domain(),
check_exiftool_filter()
] ]
|> Enum.reduce(:ok, fn |> Enum.reduce(:ok, fn
:ok, :ok -> :ok :ok, :ok -> :ok
@ -337,4 +377,54 @@ defmodule Pleroma.Config.DeprecationWarnings do
:ok :ok
end end
end end
def check_uploader_base_url_set() do
uses_local_uploader? = Config.get([Pleroma.Upload, :uploader]) == Pleroma.Uploaders.Local
base_url = Pleroma.Config.get([Pleroma.Upload, :base_url])
if base_url || !uses_local_uploader? do
:ok
else
Logger.error("""
!!!WARNING!!!
Your config does not specify a base_url for uploads!
Please make the following change:\n
\n* `config :pleroma, Pleroma.Upload, base_url: "https://example.com/media/"`
\n
\nPlease note that it is HEAVILY recommended to use a subdomain to host user-uploaded media!
""")
# This is a hard exit - the uploader will not work without a base_url
raise ArgumentError, message: "No base_url set for uploads - please set one in your config!"
end
end
def check_uploader_base_url_is_not_base_domain() do
uses_local_uploader? = Config.get([Pleroma.Upload, :uploader]) == Pleroma.Uploaders.Local
uploader_host =
[Pleroma.Upload, :base_url]
|> Pleroma.Config.get()
|> URI.parse()
|> Map.get(:host)
akkoma_host =
[Pleroma.Web.Endpoint, :url]
|> Pleroma.Config.get()
|> Keyword.get(:host)
if uploader_host == akkoma_host && uses_local_uploader? do
Logger.error("""
!!!WARNING!!!
Your Akkoma Host and your Upload base_url's host are the same!
This can potentially be insecure!
It is HIGHLY recommended that you migrate your media uploads
to a subdomain at your earliest convenience
""")
end
# This isn't actually an error condition, just a warning
:ok
end
end end


@ -23,7 +23,7 @@ defmodule Pleroma.Config.ReleaseRuntimeProvider do
with_runtime_config = with_runtime_config =
if File.exists?(config_path) do if File.exists?(config_path) do
# <https://git.pleroma.social/pleroma/pleroma/-/issues/3135> # <https://git.pleroma.social/pleroma/pleroma/-/issues/3135>
%File.Stat{mode: mode} = File.lstat!(config_path) %File.Stat{mode: mode} = File.stat!(config_path)
if Bitwise.band(mode, 0o007) > 0 do if Bitwise.band(mode, 0o007) > 0 do
raise "Configuration at #{config_path} has world-permissions, execute the following: chmod o= #{config_path}" raise "Configuration at #{config_path} has world-permissions, execute the following: chmod o= #{config_path}"


@ -8,7 +8,6 @@ defmodule Pleroma.Config.TransferTask do
alias Pleroma.Config alias Pleroma.Config
alias Pleroma.ConfigDB alias Pleroma.ConfigDB
alias Pleroma.Repo alias Pleroma.Repo
alias Pleroma.Config.ConfigurableFromDatabase
require Logger require Logger
@ -25,7 +24,6 @@ defmodule Pleroma.Config.TransferTask do
defp reboot_time_subkeys, defp reboot_time_subkeys,
do: [ do: [
{:pleroma, Pleroma.Captcha, [:seconds_valid]}, {:pleroma, Pleroma.Captcha, [:seconds_valid]},
{:pleroma, Pleroma.Upload, [:proxy_remote]},
{:pleroma, :instance, [:upload_limit]}, {:pleroma, :instance, [:upload_limit]},
{:pleroma, :http, [:pool_size]}, {:pleroma, :http, [:pool_size]},
{:pleroma, :http, [:proxy_url]} {:pleroma, :http, [:proxy_url]}
@ -92,18 +90,6 @@ defmodule Pleroma.Config.TransferTask do
defp invalid_key_or_group(_), do: false defp invalid_key_or_group(_), do: false
defp merge_with_default(%{group: group, key: key, value: value} = setting) do defp merge_with_default(%{group: group, key: key, value: value} = setting) do
if !ConfigurableFromDatabase.whitelisted_config?(setting) do
Logger.warning(~s[
config #{inspect(group)}, #{inspect(key)} is set in the database,
but it is not explicitly allowed to be there. Consider removing it
with
MIX: mix pleroma.config delete #{group} #{key}
OTP: ./bin/pleroma_ctl config delete #{group} #{key}
and setting it in your .exs file instead
config #{inspect(group)}, #{inspect(key)}, #{inspect(value)}
])
end
default = default =
if group == :pleroma do if group == :pleroma do
Config.get([key], Config.Holder.default_config(group, key)) Config.get([key], Config.Holder.default_config(group, key))


@ -64,4 +64,7 @@ defmodule Pleroma.Constants do
"Service" "Service"
] ]
) )
# Internally used as top-level types for media attachments and user images
const(attachment_types, do: ["Document", "Image"])
end end


@ -55,12 +55,61 @@ defmodule Pleroma.Emails.Mailer do
@doc false @doc false
def validate_dependency do def validate_dependency do
parse_config([]) parse_config([], defaults: false)
|> Keyword.get(:adapter) |> Keyword.get(:adapter)
|> Swoosh.Mailer.validate_dependency() |> Swoosh.Mailer.validate_dependency()
end end
defp parse_config(config) do defp ensure_charlist(input) do
Swoosh.Mailer.parse_config(@otp_app, __MODULE__, @mailer_config, config) case input do
i when is_binary(i) -> String.to_charlist(input)
i when is_list(i) -> i
end
end
defp default_config(adapter, conf, opts)
defp default_config(_, _, defaults: false) do
[]
end
defp default_config(Swoosh.Adapters.SMTP, conf, _) do
# gen_smtp and Erlang's tls defaults are very barebones, if nothing is set.
# Add sane defaults for our usecase to make config less painful for admins
relay = ensure_charlist(Keyword.get(conf, :relay))
ssl_disabled = Keyword.get(conf, :ssl) === false
os_cacerts = :public_key.cacerts_get()
common_tls_opts = [
cacerts: os_cacerts,
versions: [:"tlsv1.2", :"tlsv1.3"],
verify: :verify_peer,
# some versions supposedly have issues verifying wildcard certs without this
server_name_indication: relay,
# the default of 10 is too restrictive
depth: 32
]
[
auth: :always,
no_mx_lookups: false,
# Direct SSL/TLS
# (if ssl was explicitly disabled, we must not pass TLS options to the socket)
ssl: true,
sockopts: if(ssl_disabled, do: [], else: common_tls_opts),
# STARTTLS upgrade (can't be set to :always when already using direct TLS)
tls: :if_available,
tls_options: common_tls_opts
]
end
defp default_config(_, _, _), do: []
defp parse_config(config, opts \\ []) do
conf = Swoosh.Mailer.parse_config(@otp_app, __MODULE__, @mailer_config, config)
adapter = Keyword.get(conf, :adapter)
default_config(adapter, conf, opts)
|> Keyword.merge(conf)
end end
end end
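
As a sketch of the intent (host and credentials below are placeholders): an admin config such as the following now gets the strict TLS and STARTTLS defaults merged in underneath it, while any key the admin sets explicitly still wins via `Keyword.merge/2`.

```
config :pleroma, Pleroma.Emails.Mailer,
  enabled: true,
  adapter: Swoosh.Adapters.SMTP,
  relay: "smtp.example.com",
  username: "akkoma@example.com",
  password: "hunter2",
  port: 465
```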


@ -74,7 +74,7 @@ defmodule Pleroma.Emails.UserEmail do
def password_reset_email(user, token) when is_binary(token) do def password_reset_email(user, token) when is_binary(token) do
Gettext.with_locale_or_default user.language do Gettext.with_locale_or_default user.language do
password_reset_url = ~p[/api/v1/pleroma/password_reset/#{token}] password_reset_url = url(~p[/api/v1/pleroma/password_reset/#{token}])
html_body = html_body =
Gettext.dpgettext( Gettext.dpgettext(
@ -107,7 +107,7 @@ defmodule Pleroma.Emails.UserEmail do
to_name \\ nil to_name \\ nil
) do ) do
Gettext.with_locale_or_default user.language do Gettext.with_locale_or_default user.language do
registration_url = ~p[/registration/#{user_invite_token.token}] registration_url = url(~p[/registration/#{user_invite_token.token}])
html_body = html_body =
Gettext.dpgettext( Gettext.dpgettext(
@ -140,7 +140,7 @@ defmodule Pleroma.Emails.UserEmail do
def account_confirmation_email(user) do def account_confirmation_email(user) do
Gettext.with_locale_or_default user.language do Gettext.with_locale_or_default user.language do
confirmation_url = ~p[/api/account/confirm_email/#{user.id}/#{user.confirmation_token}] confirmation_url = url(~p[/api/account/confirm_email/#{user.id}/#{user.confirmation_token}])
html_body = html_body =
Gettext.dpgettext( Gettext.dpgettext(
@ -330,7 +330,7 @@ defmodule Pleroma.Emails.UserEmail do
|> Pleroma.JWT.generate_and_sign!() |> Pleroma.JWT.generate_and_sign!()
|> Base.encode64() |> Base.encode64()
~p[/mailer/unsubscribe/#{token}] url(~p[/mailer/unsubscribe/#{token}])
end end
def backup_is_ready_email(backup, admin_user_id \\ nil) do def backup_is_ready_email(backup, admin_user_id \\ nil) do


@ -26,12 +26,37 @@ defmodule Pleroma.Emoji.Pack do
alias Pleroma.Emoji.Pack alias Pleroma.Emoji.Pack
alias Pleroma.Utils alias Pleroma.Utils
# Invalid/Malicious names are supposed to be filtered out before path joining,
# but there are many entrypoints to affected functions so as the code changes
# we might accidentally let an unsanitised name slip through.
# To make sure, use the helpers below, which crash the process otherwise.
# ALWAYS use this when constructing paths from an external name!
# (name meaning it must be only a single path component)
defp path_join_name_safe(dir, name) do
if to_string(name) != Path.basename(name) or name in ["..", ".", ""] do
raise "Invalid or malicious pack name: #{name}"
else
Path.join(dir, name)
end
end
# ALWAYS use this to join external paths
# (which are allowed to have several components)
defp path_join_safe(dir, path) do
{:ok, safe_path} = Path.safe_relative(path)
Path.join(dir, safe_path)
end
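
Illustrative behaviour of the two helpers (paths are hypothetical):

```
path_join_name_safe("/var/lib/akkoma/emoji", "blobcats")
# => "/var/lib/akkoma/emoji/blobcats"

path_join_name_safe("/var/lib/akkoma/emoji", "../../etc")
# => raises "Invalid or malicious pack name: ../../etc"

path_join_safe("/var/lib/akkoma/emoji/blobcats", "sub/blob.png")
# => "/var/lib/akkoma/emoji/blobcats/sub/blob.png"

path_join_safe("/var/lib/akkoma/emoji/blobcats", "../escape.png")
# => crashes with a MatchError, since Path.safe_relative/1 returns :error
```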
@spec create(String.t()) :: {:ok, t()} | {:error, File.posix()} | {:error, :empty_values} @spec create(String.t()) :: {:ok, t()} | {:error, File.posix()} | {:error, :empty_values}
def create(name) do def create(name) do
with :ok <- validate_not_empty([name]), with :ok <- validate_not_empty([name]),
dir <- Path.join(emoji_path(), name), dir <- path_join_name_safe(emoji_path(), name),
:ok <- File.mkdir(dir) do :ok <- File.mkdir(dir) do
save_pack(%__MODULE__{pack_file: Path.join(dir, "pack.json")}) save_pack(%__MODULE__{
path: dir,
pack_file: Path.join(dir, "pack.json")
})
end end
end end
@ -65,7 +90,7 @@ defmodule Pleroma.Emoji.Pack do
{:ok, [binary()]} | {:error, File.posix(), binary()} | {:error, :empty_values} {:ok, [binary()]} | {:error, File.posix(), binary()} | {:error, :empty_values}
def delete(name) do def delete(name) do
with :ok <- validate_not_empty([name]), with :ok <- validate_not_empty([name]),
pack_path <- Path.join(emoji_path(), name) do pack_path <- path_join_name_safe(emoji_path(), name) do
File.rm_rf(pack_path) File.rm_rf(pack_path)
end end
end end
@ -89,7 +114,7 @@ defmodule Pleroma.Emoji.Pack do
end) end)
end end
@spec add_file(t(), String.t(), Path.t(), Plug.Upload.t()) :: @spec add_file(t(), String.t(), Path.t(), Plug.Upload.t() | binary()) ::
{:ok, t()} {:ok, t()}
| {:error, File.posix() | atom()} | {:error, File.posix() | atom()}
def add_file(%Pack{} = pack, _, _, %Plug.Upload{content_type: "application/zip"} = file) do def add_file(%Pack{} = pack, _, _, %Plug.Upload{content_type: "application/zip"} = file) do
@ -107,7 +132,7 @@ defmodule Pleroma.Emoji.Pack do
Enum.map_reduce(emojies, pack, fn item, emoji_pack -> Enum.map_reduce(emojies, pack, fn item, emoji_pack ->
emoji_file = %Plug.Upload{ emoji_file = %Plug.Upload{
filename: item[:filename], filename: item[:filename],
path: Path.join(tmp_dir, item[:path]) path: path_join_safe(tmp_dir, item[:path])
} }
{:ok, updated_pack} = {:ok, updated_pack} =
@ -137,6 +162,14 @@ defmodule Pleroma.Emoji.Pack do
end end
def add_file(%Pack{} = pack, shortcode, filename, %Plug.Upload{} = file) do def add_file(%Pack{} = pack, shortcode, filename, %Plug.Upload{} = file) do
try_add_file(pack, shortcode, filename, file)
end
def add_file(%Pack{} = pack, shortcode, filename, filedata) when is_binary(filedata) do
try_add_file(pack, shortcode, filename, filedata)
end
defp try_add_file(%Pack{} = pack, shortcode, filename, file) do
with :ok <- validate_not_empty([shortcode, filename]), with :ok <- validate_not_empty([shortcode, filename]),
:ok <- validate_emoji_not_exists(shortcode), :ok <- validate_emoji_not_exists(shortcode),
{:ok, updated_pack} <- do_add_file(pack, shortcode, filename, file) do {:ok, updated_pack} <- do_add_file(pack, shortcode, filename, file) do
@ -189,6 +222,7 @@ defmodule Pleroma.Emoji.Pack do
{:ok, results} <- File.ls(emoji_path) do {:ok, results} <- File.ls(emoji_path) do
names = names =
results results
# items come from File.ls, thus safe
|> Enum.map(&Path.join(emoji_path, &1)) |> Enum.map(&Path.join(emoji_path, &1))
|> Enum.reject(fn path -> |> Enum.reject(fn path ->
File.dir?(path) and File.exists?(Path.join(path, "pack.json")) File.dir?(path) and File.exists?(Path.join(path, "pack.json"))
@ -287,8 +321,8 @@ defmodule Pleroma.Emoji.Pack do
@spec load_pack(String.t()) :: {:ok, t()} | {:error, :file.posix()} @spec load_pack(String.t()) :: {:ok, t()} | {:error, :file.posix()}
def load_pack(name) do def load_pack(name) do
name = Path.basename(name) pack_dir = path_join_name_safe(emoji_path(), name)
pack_file = Path.join([emoji_path(), name, "pack.json"]) pack_file = Path.join(pack_dir, "pack.json")
with {:ok, _} <- File.stat(pack_file), with {:ok, _} <- File.stat(pack_file),
{:ok, pack_data} <- File.read(pack_file) do {:ok, pack_data} <- File.read(pack_file) do
@ -412,7 +446,13 @@ defmodule Pleroma.Emoji.Pack do
end end
defp create_archive_and_cache(pack, hash) do defp create_archive_and_cache(pack, hash) do
files = [~c"pack.json" | Enum.map(pack.files, fn {_, file} -> to_charlist(file) end)] files = [
~c"pack.json"
| Enum.map(pack.files, fn {_, file} ->
{:ok, file} = Path.safe_relative(file)
to_charlist(file)
end)
]
{:ok, {_, result}} = {:ok, {_, result}} =
:zip.zip(~c"#{pack.name}.zip", files, [:memory, cwd: to_charlist(pack.path)]) :zip.zip(~c"#{pack.name}.zip", files, [:memory, cwd: to_charlist(pack.path)])
@ -474,7 +514,7 @@ defmodule Pleroma.Emoji.Pack do
end end
defp save_file(%Plug.Upload{path: upload_path}, pack, filename) do defp save_file(%Plug.Upload{path: upload_path}, pack, filename) do
file_path = Path.join(pack.path, filename) file_path = path_join_safe(pack.path, filename)
create_subdirs(file_path) create_subdirs(file_path)
with {:ok, _} <- File.copy(upload_path, file_path) do with {:ok, _} <- File.copy(upload_path, file_path) do
@ -482,6 +522,12 @@ defmodule Pleroma.Emoji.Pack do
end end
end end
defp save_file(file_data, pack, filename) when is_binary(file_data) do
file_path = path_join_safe(pack.path, filename)
create_subdirs(file_path)
File.write(file_path, file_data, [:binary])
end
defp put_emoji(pack, shortcode, filename) do defp put_emoji(pack, shortcode, filename) do
files = Map.put(pack.files, shortcode, filename) files = Map.put(pack.files, shortcode, filename)
%{pack | files: files, files_count: length(Map.keys(files))} %{pack | files: files, files_count: length(Map.keys(files))}
@ -493,8 +539,8 @@ defmodule Pleroma.Emoji.Pack do
end end
defp rename_file(pack, filename, new_filename) do defp rename_file(pack, filename, new_filename) do
old_path = Path.join(pack.path, filename) old_path = path_join_safe(pack.path, filename)
new_path = Path.join(pack.path, new_filename) new_path = path_join_safe(pack.path, new_filename)
create_subdirs(new_path) create_subdirs(new_path)
with :ok <- File.rename(old_path, new_path) do with :ok <- File.rename(old_path, new_path) do
@ -512,7 +558,7 @@ defmodule Pleroma.Emoji.Pack do
defp remove_file(pack, shortcode) do defp remove_file(pack, shortcode) do
with {:ok, filename} <- get_filename(pack, shortcode), with {:ok, filename} <- get_filename(pack, shortcode),
emoji <- Path.join(pack.path, filename), emoji <- path_join_safe(pack.path, filename),
:ok <- File.rm(emoji) do :ok <- File.rm(emoji) do
remove_dir_if_empty(emoji, filename) remove_dir_if_empty(emoji, filename)
end end
@ -530,7 +576,7 @@ defmodule Pleroma.Emoji.Pack do
defp get_filename(pack, shortcode) do defp get_filename(pack, shortcode) do
with %{^shortcode => filename} when is_binary(filename) <- pack.files, with %{^shortcode => filename} when is_binary(filename) <- pack.files,
file_path <- Path.join(pack.path, filename), file_path <- path_join_safe(pack.path, filename),
{:ok, _} <- File.stat(file_path) do {:ok, _} <- File.stat(file_path) do
{:ok, filename} {:ok, filename}
else else
@ -568,7 +614,7 @@ defmodule Pleroma.Emoji.Pack do
end end
defp copy_as(remote_pack, local_name) do defp copy_as(remote_pack, local_name) do
path = Path.join(emoji_path(), local_name) path = path_join_name_safe(emoji_path(), local_name)
%__MODULE__{ %__MODULE__{
name: local_name, name: local_name,


@ -6,8 +6,6 @@ defmodule Pleroma.HTML do
# Scrubbers are compiled on boot so they can be configured in OTP releases # Scrubbers are compiled on boot so they can be configured in OTP releases
# @on_load :compile_scrubbers # @on_load :compile_scrubbers
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
def compile_scrubbers do def compile_scrubbers do
dir = Path.join(:code.priv_dir(:pleroma), "scrubbers") dir = Path.join(:code.priv_dir(:pleroma), "scrubbers")
@ -67,22 +65,9 @@ defmodule Pleroma.HTML do
end end
end end
def extract_first_external_url_from_object(%{data: %{"content" => content}} = object) @spec extract_first_external_url_from_object(Pleroma.Object.t()) :: String.t() | nil
def extract_first_external_url_from_object(%{data: %{"content" => content}})
when is_binary(content) do when is_binary(content) do
unless object.data["fake"] do
key = "URL|#{object.id}"
@cachex.fetch!(:scrubber_cache, key, fn _key ->
{:commit, {:ok, extract_first_external_url(content)}}
end)
else
{:ok, extract_first_external_url(content)}
end
end
def extract_first_external_url_from_object(_), do: {:error, :no_content}
def extract_first_external_url(content) do
content content
|> Floki.parse_fragment!() |> Floki.parse_fragment!()
|> Floki.find("a:not(.mention,.hashtag,.attachment,[rel~=\"tag\"])") |> Floki.find("a:not(.mention,.hashtag,.attachment,[rel~=\"tag\"])")
@ -90,4 +75,6 @@ defmodule Pleroma.HTML do
|> Floki.attribute("href") |> Floki.attribute("href")
|> Enum.at(0) |> Enum.at(0)
end end
def extract_first_external_url_from_object(_), do: nil
end end


@ -74,7 +74,12 @@ defmodule Pleroma.HTTP do
request = build_request(method, headers, options, url, body, params) request = build_request(method, headers, options, url, body, params)
client = Tesla.client([Tesla.Middleware.FollowRedirects, Tesla.Middleware.Telemetry]) client = Tesla.client([Tesla.Middleware.FollowRedirects, Tesla.Middleware.Telemetry])
Logger.debug("Outbound: #{method} #{url}")
request(client, request) request(client, request)
rescue
e ->
Logger.error("Failed to fetch #{url}: #{inspect(e)}")
{:error, :fetch_error}
end end
@spec request(Client.t(), keyword()) :: {:ok, Env.t()} | {:error, any()} @spec request(Client.t(), keyword()) :: {:ok, Env.t()} | {:error, any()}


@ -65,6 +65,15 @@ defmodule Pleroma.HTTP.AdapterHelper do
|> put_in([:pools, :default, :size], pool_size) |> put_in([:pools, :default, :size], pool_size)
end end
def ensure_ipv6(opts) do
# Default transport opts already enable IPv6, so just ensure they're loaded
opts
|> maybe_add_pools()
|> maybe_add_default_pool()
|> maybe_add_conn_opts()
|> maybe_add_transport_opts()
end
defp maybe_add_pools(opts) do defp maybe_add_pools(opts) do
if Keyword.has_key?(opts, :pools) do if Keyword.has_key?(opts, :pools) do
opts opts
@ -96,11 +105,29 @@ defmodule Pleroma.HTTP.AdapterHelper do
defp maybe_add_transport_opts(opts) do defp maybe_add_transport_opts(opts) do
transport_opts = get_in(opts, [:pools, :default, :conn_opts, :transport_opts]) transport_opts = get_in(opts, [:pools, :default, :conn_opts, :transport_opts])
unless is_nil(transport_opts) do opts =
opts unless is_nil(transport_opts) do
else opts
put_in(opts, [:pools, :default, :conn_opts, :transport_opts], []) else
end put_in(opts, [:pools, :default, :conn_opts, :transport_opts], [])
end
# IPv6 is disabled and IPv4 enabled by default; ensure we can use both
put_in(opts, [:pools, :default, :conn_opts, :transport_opts, :inet6], true)
end
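
After this pipeline runs, the default pool always carries the IPv6 transport flag; sketched below with all other keys omitted:

```
[
  pools: %{
    default: [
      conn_opts: [
        transport_opts: [inet6: true]
      ]
    ]
  }
]
```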
def add_default_pool_max_idle_time(opts, pool_timeout) do
opts
|> maybe_add_pools()
|> maybe_add_default_pool()
|> put_in([:pools, :default, :pool_max_idle_time], pool_timeout)
end
def add_default_conn_max_idle_time(opts, connection_timeout) do
opts
|> maybe_add_pools()
|> maybe_add_default_pool()
|> put_in([:pools, :default, :conn_max_idle_time], connection_timeout)
end end
@doc """ @doc """

lib/pleroma/http/backoff.ex (new file, 121 lines)

@ -0,0 +1,121 @@
defmodule Pleroma.HTTP.Backoff do
alias Pleroma.HTTP
require Logger
@cachex Pleroma.Config.get([:cachex, :provider], Cachex)
@backoff_cache :http_backoff_cache
# attempt to parse a timestamp from a header
# returns nil if it can't parse the timestamp
@spec timestamp_or_nil(binary) :: DateTime.t() | nil
defp timestamp_or_nil(header) do
case DateTime.from_iso8601(header) do
{:ok, stamp, _} ->
stamp
_ ->
nil
end
end
# attempt to parse the x-ratelimit-reset header from the headers
@spec x_ratelimit_reset(headers :: list) :: DateTime.t() | nil
defp x_ratelimit_reset(headers) do
with {_header, value} <- List.keyfind(headers, "x-ratelimit-reset", 0),
true <- is_binary(value) do
timestamp_or_nil(value)
else
_ ->
nil
end
end
# attempt to parse the Retry-After header from the headers
# this can be either a timestamp _or_ a number of seconds to wait!
# we'll return a datetime if we can parse it, or nil if we can't
@spec retry_after(headers :: list) :: DateTime.t() | nil
defp retry_after(headers) do
with {_header, value} <- List.keyfind(headers, "retry-after", 0),
true <- is_binary(value) do
# first, see if it's an integer
case Integer.parse(value) do
{seconds, ""} ->
Logger.debug("Parsed Retry-After header: #{seconds} seconds")
DateTime.utc_now() |> Timex.shift(seconds: seconds)
_ ->
# if it's not an integer, try to parse it as a timestamp
timestamp_or_nil(value)
end
else
_ ->
nil
end
end
# given a set of headers, will attempt to find the next backoff timestamp
# if it can't find one, it will default to 5 minutes from now
@spec next_backoff_timestamp(%{headers: list}) :: DateTime.t()
defp next_backoff_timestamp(%{headers: headers}) when is_list(headers) do
default_5_minute_backoff =
DateTime.utc_now()
|> Timex.shift(seconds: 5 * 60)
backoff =
[&x_ratelimit_reset/1, &retry_after/1]
|> Enum.map(& &1.(headers))
|> Enum.find(&(&1 != nil))
if is_nil(backoff) do
Logger.debug("No backoff headers found, defaulting to 5 minutes from now")
default_5_minute_backoff
else
Logger.debug("Found backoff header, will back off until: #{backoff}")
backoff
end
end
defp next_backoff_timestamp(_), do: DateTime.utc_now() |> Timex.shift(seconds: 5 * 60)
# utility function to check the HTTP response for potential backoff headers
# will check if we get a 429 or 503 response, and if we do, will back off for a bit
@spec check_backoff({:ok | :error, HTTP.Env.t()}, binary()) ::
{:ok | :error, HTTP.Env.t()} | {:error, :ratelimit}
defp check_backoff({:ok, env}, host) do
case env.status do
status when status in [429, 503] ->
Logger.error("Rate limited on #{host}! Backing off...")
timestamp = next_backoff_timestamp(env)
ttl = Timex.diff(timestamp, DateTime.utc_now(), :seconds)
# cache the host for the duration of the computed backoff
@cachex.put(@backoff_cache, host, true, ttl: ttl)
{:error, :ratelimit}
_ ->
{:ok, env}
end
end
defp check_backoff(env, _), do: env
@doc """
this acts as a single gateway for all GET requests:
we check whether the host is in the backoff cache and, if it is, automatically fail the request.
this ensures that we don't hammer the server with requests and instead wait for the backoff to expire.
this is a very simple implementation, and can be improved upon!
"""
@spec get(binary, list, list) :: {:ok | :error, HTTP.Env.t()} | {:error, :ratelimit}
def get(url, headers \\ [], options \\ []) do
%{host: host} = URI.parse(url)
case @cachex.get(@backoff_cache, host) do
{:ok, nil} ->
url
|> HTTP.get(headers, options)
|> check_backoff(host)
_ ->
{:error, :ratelimit}
end
end
end
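
Callers can treat it as a drop-in for a plain GET, with one extra error to handle; a usage sketch:

```
case Pleroma.HTTP.Backoff.get("https://busy.example/objects/1") do
  {:ok, %Tesla.Env{status: 200, body: body}} ->
    {:ok, body}

  {:error, :ratelimit} ->
    # host is still in the backoff cache; retry after it expires
    {:error, :ratelimit}

  other ->
    other
end
```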


@ -15,8 +15,19 @@ defmodule Pleroma.JobQueueMonitor do
@impl true @impl true
def init(state) do def init(state) do
:telemetry.attach("oban-monitor-failure", [:oban, :job, :exception], &handle_event/4, nil) :telemetry.attach(
:telemetry.attach("oban-monitor-success", [:oban, :job, :stop], &handle_event/4, nil) "oban-monitor-failure",
[:oban, :job, :exception],
&Pleroma.JobQueueMonitor.handle_event/4,
nil
)
:telemetry.attach(
"oban-monitor-success",
[:oban, :job, :stop],
&Pleroma.JobQueueMonitor.handle_event/4,
nil
)
{:ok, state} {:ok, state}
end end


@ -178,7 +178,10 @@ defmodule Pleroma.Object do
ap_id ap_id
Keyword.get(options, :fetch) -> Keyword.get(options, :fetch) ->
Fetcher.fetch_object_from_id!(ap_id, options) case Fetcher.fetch_object_from_id(ap_id, options) do
{:ok, object} -> object
_ -> nil
end
true -> true ->
get_cached_by_ap_id(ap_id) get_cached_by_ap_id(ap_id)


@ -11,6 +11,9 @@ defmodule Pleroma.Object.Containment do
Object containment is an important step in validating remote objects to prevent Object containment is an important step in validating remote objects to prevent
spoofing, therefore removal of object containment functions is NOT recommended. spoofing, therefore removal of object containment functions is NOT recommended.
""" """
alias Pleroma.Web.ActivityPub.Transmogrifier
def get_actor(%{"actor" => actor}) when is_binary(actor) do def get_actor(%{"actor" => actor}) when is_binary(actor) do
actor actor
end end
@ -47,6 +50,31 @@ defmodule Pleroma.Object.Containment do
defp compare_uris(%URI{host: host} = _id_uri, %URI{host: host} = _other_uri), do: :ok defp compare_uris(%URI{host: host} = _id_uri, %URI{host: host} = _other_uri), do: :ok
defp compare_uris(_id_uri, _other_uri), do: :error defp compare_uris(_id_uri, _other_uri), do: :error
defp compare_uris_exact(uri, uri), do: :ok
defp compare_uris_exact(%URI{} = id, %URI{} = other),
do: compare_uris_exact(URI.to_string(id), URI.to_string(other))
defp compare_uris_exact(id_uri, other_uri)
when is_binary(id_uri) and is_binary(other_uri) do
norm_id = String.replace_suffix(id_uri, "/", "")
norm_other = String.replace_suffix(other_uri, "/", "")
if norm_id == norm_other, do: :ok, else: :error
end
@doc """
Checks whether a URL to fetch from points at the local server.
We never want to fetch from ourselves; if it's not in the database
it can't be authentic and must be a counterfeit.
"""
def contain_local_fetch(id) do
case compare_uris(URI.parse(id), Pleroma.Web.Endpoint.struct_url()) do
:ok -> :error
_ -> :ok
end
end
@doc """ @doc """
Checks that an imported AP object's actor matches the host it came from. Checks that an imported AP object's actor matches the host it came from.
""" """
@ -62,8 +90,31 @@ defmodule Pleroma.Object.Containment do
def contain_origin(id, %{"attributedTo" => actor} = params), def contain_origin(id, %{"attributedTo" => actor} = params),
do: contain_origin(id, Map.put(params, "actor", actor)) do: contain_origin(id, Map.put(params, "actor", actor))
def contain_origin(_id, _data), do: :error def contain_origin(_id, _data), do: :ok
@doc """
Check whether the fetch URL (after redirects) exactly (sans trailing slash) matches either
the canonical ActivityPub id or the object's url field (for display URLs from *key and Mastodon).
Since this is meant to be used for fetches, anonymous or transient objects are not accepted here.
"""
def contain_id_to_fetch(url, %{"id" => id} = data) when is_binary(id) do
with {:id, :error} <- {:id, compare_uris_exact(id, url)},
# "url" can be a "Link" object and this is checked before full normalisation
display_url <- Transmogrifier.fix_url(data)["url"],
true <- display_url != nil do
compare_uris_exact(display_url, url)
else
{:id, :ok} -> :ok
_ -> :error
end
end
def contain_id_to_fetch(_url, _data), do: :error
@doc """
Check whether the object id is from the same host as another id
"""
def contain_origin_from_id(id, %{"id" => other_id} = _params) when is_binary(other_id) do def contain_origin_from_id(id, %{"id" => other_id} = _params) when is_binary(other_id) do
id_uri = URI.parse(id) id_uri = URI.parse(id)
other_uri = URI.parse(other_id) other_uri = URI.parse(other_id)
@ -85,4 +136,12 @@ defmodule Pleroma.Object.Containment do
do: contain_origin(id, object) do: contain_origin(id, object)
def contain_child(_), do: :ok def contain_child(_), do: :ok
@doc "Checks whether two URIs belong to the same domain"
def same_origin(id1, id2) do
uri1 = URI.parse(id1)
uri2 = URI.parse(id2)
compare_uris(uri1, uri2)
end
end end
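
Illustrative outcomes of the new exact-match check (URLs are hypothetical):

```
contain_id_to_fetch(
  "https://example.com/objects/1",
  %{"id" => "https://example.com/objects/1/"}
)
# => :ok  (only a trailing slash differs)

contain_id_to_fetch(
  "https://example.com/@user/1",
  %{"id" => "https://example.com/objects/1", "url" => "https://example.com/@user/1"}
)
# => :ok  (matches the display "url" field instead)

contain_id_to_fetch(
  "https://example.com/objects/1",
  %{"id" => "https://other.example/objects/1"}
)
# => :error
```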


@ -18,6 +18,16 @@ defmodule Pleroma.Object.Fetcher do
require Logger require Logger
require Pleroma.Constants require Pleroma.Constants
@moduledoc """
This module deals with correctly fetching ActivityPub objects in a safe way.
The core function is `fetch_and_contain_remote_object_from_id/1`, which performs
the actual fetch and common safety and authenticity checks. The other `fetch_*`
functions use the former and perform some additional tasks.
"""
@mix_env Mix.env()
defp touch_changeset(changeset) do defp touch_changeset(changeset) do
updated_at = updated_at =
NaiveDateTime.utc_now() NaiveDateTime.utc_now()
@@ -103,54 +113,78 @@ defmodule Pleroma.Object.Fetcher do
     end
   end

+  @doc "Assumes object already is in our database and refetches from remote to update (e.g. for polls)"
   def refetch_object(%Object{data: %{"id" => id}} = object) do
     with {:local, false} <- {:local, Object.local?(object)},
          {:ok, new_data} <- fetch_and_contain_remote_object_from_id(id),
+         {:id, true} <- {:id, new_data["id"] == id},
          {:ok, object} <- reinject_object(object, new_data) do
       {:ok, object}
     else
       {:local, true} -> {:ok, object}
+      {:id, false} -> {:error, :id_mismatch}
       e -> {:error, e}
     end
   end

-  # Note: will create a Create activity, which we need internally at the moment.
+  @doc """
+  Fetches a new object and puts it through the processing pipeline for inbound objects.
+
+  Note: will also insert a fake Create activity, since at the moment we internally
+  need everything to be traced back to a Create activity.
+  """
   def fetch_object_from_id(id, options \\ []) do
     with %URI{} = uri <- URI.parse(id),
          # let's check the URI is even vaguely valid first
-         {:scheme, true} <- {:scheme, uri.scheme == "http" or uri.scheme == "https"},
+         {:valid_uri_scheme, true} <-
+           {:valid_uri_scheme, uri.scheme == "http" or uri.scheme == "https"},
          # If we have instance restrictions, apply them here to prevent fetching from unwanted instances
-         {:ok, nil} <- Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_reject(uri),
-         {:ok, _} <- Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_accept(uri),
+         {:mrf_reject_check, {:ok, nil}} <-
+           {:mrf_reject_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_reject(uri)},
+         {:mrf_accept_check, {:ok, _}} <-
+           {:mrf_accept_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_accept(uri)},
          {_, nil} <- {:fetch_object, Object.get_cached_by_ap_id(id)},
         {_, true} <- {:allowed_depth, Federator.allowed_thread_distance?(options[:depth])},
         {_, {:ok, data}} <- {:fetch, fetch_and_contain_remote_object_from_id(id)},
         {_, nil} <- {:normalize, Object.normalize(data, fetch: false)},
         params <- prepare_activity_params(data),
-         {_, :ok} <- {:containment, Containment.contain_origin(id, params)},
         {_, {:ok, activity}} <-
           {:transmogrifier, Transmogrifier.handle_incoming(params, options)},
         {_, _data, %Object{} = object} <-
           {:object, data, Object.normalize(activity, fetch: false)} do
      {:ok, object}
    else
-      {:allowed_depth, false} ->
-        {:error, "Max thread distance exceeded."}
-
-      {:scheme, false} ->
-        {:error, "URI Scheme Invalid"}
-
-      {:containment, _} ->
-        {:error, "Object containment failed."}
-
-      {:transmogrifier, {:error, {:reject, e}}} ->
-        {:reject, e}
-
-      {:transmogrifier, {:reject, e}} ->
-        {:reject, e}
-
-      {:transmogrifier, _} = e ->
-        {:error, e}
+      {:allowed_depth, false} = e ->
+        log_fetch_error(id, e)
+        {:error, :allowed_depth}
+
+      {:valid_uri_scheme, _} = e ->
+        log_fetch_error(id, e)
+        {:error, :invalid_uri_scheme}
+
+      {:mrf_reject_check, _} = e ->
+        log_fetch_error(id, e)
+        {:reject, :mrf}
+
+      {:mrf_accept_check, _} = e ->
+        log_fetch_error(id, e)
+        {:reject, :mrf}
+
+      {:containment, reason} = e ->
+        log_fetch_error(id, e)
+        {:error, reason}
+
+      {:transmogrifier, {:error, {:reject, reason}}} = e ->
+        log_fetch_error(id, e)
+        {:reject, reason}
+
+      {:transmogrifier, {:reject, reason}} = e ->
+        log_fetch_error(id, e)
+        {:reject, reason}
+
+      {:transmogrifier, reason} = e ->
+        log_fetch_error(id, e)
+        {:error, reason}

      {:object, data, nil} ->
        reinject_object(%Object{}, data)
@@ -161,17 +195,21 @@ defmodule Pleroma.Object.Fetcher do
       {:fetch_object, %Object{} = object} ->
         {:ok, object}

-      {:fetch, {:error, error}} ->
-        {:error, error}
+      {:fetch, {:error, reason}} = e ->
+        log_fetch_error(id, e)
+        {:error, reason}
+
+      {:reject, reason} ->
+        {:reject, reason}

       e ->
-        e
+        log_fetch_error(id, e)
+        {:error, e}
     end
   end

+  defp log_fetch_error(id, error) do
+    Logger.metadata(object: id)
+    Logger.error("Object rejected while fetching #{id} #{inspect(error)}")
+  end
+
   defp prepare_activity_params(data) do
     %{
       "type" => "Create",
@@ -185,26 +223,6 @@ defmodule Pleroma.Object.Fetcher do
     |> Maps.put_if_present("bcc", data["bcc"])
   end

-  def fetch_object_from_id!(id, options \\ []) do
-    with {:ok, object} <- fetch_object_from_id(id, options) do
-      object
-    else
-      {:error, %Tesla.Mock.Error{}} ->
-        nil
-
-      {:error, {"Object has been deleted", _id, _code}} ->
-        nil
-
-      {:reject, reason} ->
-        Logger.debug("Rejected #{id} while fetching: #{inspect(reason)}")
-        nil
-
-      e ->
-        Logger.error("Error while fetching #{id}: #{inspect(e)}")
-        nil
-    end
-  end
-
   defp make_signature(id, date) do
     uri = URI.parse(id)
@@ -235,6 +253,7 @@ defmodule Pleroma.Object.Fetcher do
     end
   end

+  @doc "Fetches an arbitrary remote object and performs basic safety and authenticity checks"
   def fetch_and_contain_remote_object_from_id(id)

   def fetch_and_contain_remote_object_from_id(%{"id" => id}),
@@ -243,18 +262,46 @@ defmodule Pleroma.Object.Fetcher do
   def fetch_and_contain_remote_object_from_id(id) when is_binary(id) do
     Logger.debug("Fetching object #{id} via AP")

-    with {:scheme, true} <- {:scheme, String.starts_with?(id, "http")},
-         {:ok, body} <- get_object(id),
+    with {:valid_uri_scheme, true} <- {:valid_uri_scheme, String.starts_with?(id, "http")},
+         %URI{} = uri <- URI.parse(id),
+         {:mrf_reject_check, {:ok, nil}} <-
+           {:mrf_reject_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_reject(uri)},
+         {:mrf_accept_check, {:ok, _}} <-
+           {:mrf_accept_check, Pleroma.Web.ActivityPub.MRF.SimplePolicy.check_accept(uri)},
+         {:local_fetch, :ok} <- {:local_fetch, Containment.contain_local_fetch(id)},
+         {:ok, final_id, body} <- get_object(id),
          {:ok, data} <- safe_json_decode(body),
-         :ok <- Containment.contain_origin_from_id(id, data) do
-      unless Instances.reachable?(id) do
-        Instances.set_reachable(id)
+         {_, :ok} <- {:strict_id, Containment.contain_id_to_fetch(final_id, data)},
+         {_, :ok} <- {:containment, Containment.contain_origin(final_id, data)} do
+      unless Instances.reachable?(final_id) do
+        Instances.set_reachable(final_id)
       end

       {:ok, data}
     else
-      {:scheme, _} ->
-        {:error, "Unsupported URI scheme"}
+      {:strict_id, _} = e ->
+        log_fetch_error(id, e)
+        {:error, :id_mismatch}
+
+      {:mrf_reject_check, _} = e ->
+        log_fetch_error(id, e)
+        {:reject, :mrf}
+
+      {:mrf_accept_check, _} = e ->
+        log_fetch_error(id, e)
+        {:reject, :mrf}
+
+      {:valid_uri_scheme, _} = e ->
+        log_fetch_error(id, e)
+        {:error, :invalid_uri_scheme}
+
+      {:local_fetch, _} = e ->
+        log_fetch_error(id, e)
+        {:error, :local_resource}
+
+      {:containment, reason} ->
+        log_fetch_error(id, reason)
+        {:error, reason}

       {:error, e} ->
         {:error, e}
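Callers now receive machine-readable atoms instead of free-form strings. A hypothetical caller of fetch_and_contain_remote_object_from_id/1 distinguishing the new outcomes (handle_data/1 and the result atoms are illustrative; the matched tuples mirror the returns above):

defmodule FetchCallerSketch do
  # handle_data/1 stands in for whatever the caller does with the JSON on success.
  def fetch_checked(id) do
    case Pleroma.Object.Fetcher.fetch_and_contain_remote_object_from_id(id) do
      {:ok, data} -> handle_data(data)
      {:reject, :mrf} -> :ignored            # blocked by instance policy
      {:error, :id_mismatch} -> :dropped     # final URL and claimed id disagree
      {:error, :local_resource} -> :skipped  # refused to "fetch" a local resource
      {:error, reason} -> {:error, reason}
    end
  end

  defp handle_data(data), do: {:ok, data}
end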
@@ -265,47 +312,85 @@ defmodule Pleroma.Object.Fetcher do
   end

   def fetch_and_contain_remote_object_from_id(_id),
-    do: {:error, "id must be a string"}
+    do: {:error, :invalid_id}
+
+  defp check_crossdomain_redirect(final_host, original_url)
+
+  # HOPEFULLY TEMPORARY
+  # Basically none of our Tesla mocks in tests set the (supposed to
+  # exist for Tesla proper) url parameter for their responses
+  # causing almost every fetch in test to fail otherwise
+  if @mix_env == :test do
+    defp check_crossdomain_redirect(nil, _) do
+      {:cross_domain_redirect, false}
+    end
+  end
+
+  defp check_crossdomain_redirect(final_host, original_url) do
+    {:cross_domain_redirect, final_host != URI.parse(original_url).host}
+  end
+
+  if @mix_env == :test do
+    defp get_final_id(nil, initial_url), do: initial_url
+    defp get_final_id("", initial_url), do: initial_url
+  end
+
+  defp get_final_id(final_url, _initial_url) do
+    final_url
+  end

+  @doc "Do NOT use; only public for use in tests"
   def get_object(id) do
     date = Pleroma.Signature.signed_date()

     headers =
-      [{"accept", "application/activity+json"}]
+      [
+        # The first is required by spec, the second provided as a fallback for buggy implementations
+        {"accept", "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\""},
+        {"accept", "application/activity+json"}
+      ]
       |> maybe_date_fetch(date)
       |> sign_fetch(id, date)

-    case HTTP.get(id, headers) do
-      {:ok, %{body: body, status: code, headers: headers}} when code in 200..299 ->
-        case List.keyfind(headers, "content-type", 0) do
-          {_, content_type} ->
-            case Plug.Conn.Utils.media_type(content_type) do
-              {:ok, "application", "activity+json", _} ->
-                {:ok, body}
-
-              {:ok, "application", "ld+json",
-               %{"profile" => "https://www.w3.org/ns/activitystreams"}} ->
-                {:ok, body}
-
-              # pixelfed sometimes (and only sometimes) responds with http instead of https
-              {:ok, "application", "ld+json",
-               %{"profile" => "http://www.w3.org/ns/activitystreams"}} ->
-                {:ok, body}
-
-              _ ->
-                {:error, {:content_type, content_type}}
-            end
+    with {:ok, %{body: body, status: code, headers: headers, url: final_url}}
+         when code in 200..299 <-
+           HTTP.Backoff.get(id, headers),
+         remote_host <-
+           URI.parse(final_url).host,
+         {:cross_domain_redirect, false} <-
+           check_crossdomain_redirect(remote_host, id),
+         {:has_content_type, {_, content_type}} <-
+           {:has_content_type, List.keyfind(headers, "content-type", 0)},
+         {:parse_content_type, {:ok, "application", subtype, type_params}} <-
+           {:parse_content_type, Plug.Conn.Utils.media_type(content_type)} do
+      final_id = get_final_id(final_url, id)
+
+      case {subtype, type_params} do
+        {"activity+json", _} ->
+          {:ok, final_id, body}
+
+        {"ld+json", %{"profile" => "https://www.w3.org/ns/activitystreams"}} ->
+          {:ok, final_id, body}
+
+        _ ->
+          {:error, {:content_type, content_type}}
+      end
+    else
-          _ ->
-            {:error, {:content_type, nil}}
-        end
+      {:ok, %{status: code}} when code in [401, 403] ->
+        {:error, :forbidden}

       {:ok, %{status: code}} when code in [404, 410] ->
-        {:error, {"Object has been deleted", id, code}}
+        {:error, :not_found}

       {:error, e} ->
         {:error, e}

+      {:has_content_type, _} ->
+        {:error, {:content_type, nil}}
+
+      {:parse_content_type, e} ->
+        {:error, {:content_type, e}}
+
       e ->
         {:error, e}
     end
   end
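The acceptance rule get_object now enforces can be stated on its own: only application/activity+json, or application/ld+json carrying the ActivityStreams profile, passes, and the old allowance for the plain-http profile URI is gone. A standalone sketch using the same Plug.Conn.Utils.media_type/1 parser (the module name here is illustrative):

defmodule ContentTypeSketch do
  # Returns true only for the two media types accepted for ActivityPub objects.
  def acceptable?(content_type) do
    case Plug.Conn.Utils.media_type(content_type) do
      {:ok, "application", "activity+json", _params} ->
        true

      {:ok, "application", "ld+json", %{"profile" => "https://www.w3.org/ns/activitystreams"}} ->
        true

      _ ->
        false
    end
  end
end

# ContentTypeSketch.acceptable?("application/activity+json") #=> true
# ContentTypeSketch.acceptable?("text/html; charset=utf-8")  #=> false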


@@ -17,6 +17,8 @@ defmodule Pleroma.ReverseProxy do
   @failed_request_ttl :timer.seconds(60)
   @methods ~w(GET HEAD)

+  @allowed_mime_types Pleroma.Config.get([Pleroma.Upload, :allowed_mime_types], [])
+
   @cachex Pleroma.Config.get([:cachex, :provider], Cachex)

   def max_read_duration_default, do: @max_read_duration
@@ -61,11 +63,14 @@ defmodule Pleroma.ReverseProxy do
   """

   @inline_content_types [
+    "image/avif",
     "image/gif",
     "image/jpeg",
     "image/jpg",
+    "image/jxl",
     "image/png",
     "image/svg+xml",
+    "image/webp",
     "audio/mpeg",
     "audio/mp3",
     "video/webm",
@@ -250,6 +255,7 @@ defmodule Pleroma.ReverseProxy do
     headers
     |> Enum.filter(fn {k, _} -> k in @keep_resp_headers end)
     |> build_resp_cache_headers(opts)
+    |> sanitise_content_type()
     |> build_resp_content_disposition_header(opts)
     |> build_csp_headers()
     |> Keyword.merge(Keyword.get(opts, :resp_headers, []))
@@ -279,6 +285,21 @@ defmodule Pleroma.ReverseProxy do
     end
   end

+  defp sanitise_content_type(headers) do
+    original_ct = get_content_type(headers)
+
+    safe_ct =
+      Pleroma.Web.Plugs.Utils.get_safe_mime_type(
+        %{allowed_mime_types: @allowed_mime_types},
+        original_ct
+      )
+
+    [
+      {"content-type", safe_ct}
+      | Enum.filter(headers, fn {k, _v} -> k != "content-type" end)
+    ]
+  end
+
   defp build_resp_content_disposition_header(headers, opts) do
     opt = Keyword.get(opts, :inline_content_types, @inline_content_types)
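In isolation, the effect of sanitise_content_type/1 is that any upstream content-type not on the configured allow-list is replaced before the response is forwarded, so a remote server cannot serve e.g. text/html from a proxied media URL. A simplified standalone sketch; the allow-list and the application/octet-stream fallback are assumptions here, with the real lookup delegated to Pleroma.Web.Plugs.Utils.get_safe_mime_type/2:

defmodule MimeSanitiseSketch do
  @allowed_mime_types ["image/jpeg", "image/png", "image/webp", "video/mp4"]

  def sanitise_content_type(headers) do
    original =
      case List.keyfind(headers, "content-type", 0) do
        {_, ct} -> ct
        nil -> nil
      end

    # Anything not explicitly allowed is downgraded so the browser won't render it.
    safe =
      if original in @allowed_mime_types,
        do: original,
        else: "application/octet-stream"

    [{"content-type", safe} | List.keydelete(headers, "content-type", 0)]
  end
end

# MimeSanitiseSketch.sanitise_content_type([{"content-type", "text/html"}])
# #=> [{"content-type", "application/octet-stream"}]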


@@ -28,7 +28,7 @@ defmodule Pleroma.ScheduledActivity do
     timestamps()
   end

-  def changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
+  defp changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
     scheduled_activity
     |> cast(attrs, [:scheduled_at, :params])
     |> validate_required([:scheduled_at, :params])
@@ -40,26 +40,36 @@ defmodule Pleroma.ScheduledActivity do
         %{changes: %{params: %{"media_ids" => media_ids} = params}} = changeset
       )
       when is_list(media_ids) do
-    media_attachments = Utils.attachments_from_ids(%{media_ids: media_ids})
+    user = User.get_by_id(changeset.data.user_id)

-    params =
-      params
-      |> Map.put("media_attachments", media_attachments)
-      |> Map.put("media_ids", media_ids)
+    case Utils.attachments_from_ids(user, %{media_ids: media_ids}) do
+      media_attachments when is_list(media_attachments) ->
+        params =
+          params
+          |> Map.put("media_attachments", media_attachments)
+          |> Map.put("media_ids", media_ids)

-    put_change(changeset, :params, params)
+        put_change(changeset, :params, params)
+
+      {:error, _} = e ->
+        e
+
+      e ->
+        {:error, e}
+    end
   end

   defp with_media_attachments(changeset), do: changeset

-  def update_changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
+  defp update_changeset(%ScheduledActivity{} = scheduled_activity, attrs) do
+    # note: should this ever allow swapping media attachments, make sure ownership is checked
     scheduled_activity
     |> cast(attrs, [:scheduled_at])
     |> validate_required([:scheduled_at])
     |> validate_scheduled_at()
   end

-  def validate_scheduled_at(changeset) do
+  defp validate_scheduled_at(changeset) do
     validate_change(changeset, :scheduled_at, fn _, scheduled_at ->
       cond do
         not far_enough?(scheduled_at) ->
@@ -77,7 +87,7 @@ defmodule Pleroma.ScheduledActivity do
     end)
   end

-  def exceeds_daily_user_limit?(user_id, scheduled_at) do
+  defp exceeds_daily_user_limit?(user_id, scheduled_at) do
     ScheduledActivity
     |> where(user_id: ^user_id)
     |> where([sa], type(sa.scheduled_at, :date) == type(^scheduled_at, :date))
@@ -86,7 +96,7 @@ defmodule Pleroma.ScheduledActivity do
     |> Kernel.>=(Config.get([ScheduledActivity, :daily_user_limit]))
   end

-  def exceeds_total_user_limit?(user_id) do
+  defp exceeds_total_user_limit?(user_id) do
     ScheduledActivity
     |> where(user_id: ^user_id)
     |> select([sa], count(sa.id))
@@ -108,20 +118,29 @@ defmodule Pleroma.ScheduledActivity do
     diff > @min_offset
   end

-  def new(%User{} = user, attrs) do
+  defp new(%User{} = user, attrs) do
     changeset(%ScheduledActivity{user_id: user.id}, attrs)
   end

   @doc """
   Creates ScheduledActivity and add to queue to perform at scheduled_at date
   """
-  @spec create(User.t(), map()) :: {:ok, ScheduledActivity.t()} | {:error, Ecto.Changeset.t()}
+  @spec create(User.t(), map()) :: {:ok, ScheduledActivity.t()} | {:error, any()}
   def create(%User{} = user, attrs) do
-    Multi.new()
-    |> Multi.insert(:scheduled_activity, new(user, attrs))
-    |> maybe_add_jobs(Config.get([ScheduledActivity, :enabled]))
-    |> Repo.transaction()
-    |> transaction_response
+    case new(user, attrs) do
+      %Ecto.Changeset{} = sched_data ->
+        Multi.new()
+        |> Multi.insert(:scheduled_activity, sched_data)
+        |> maybe_add_jobs(Config.get([ScheduledActivity, :enabled]))
+        |> Repo.transaction()
+        |> transaction_response
+
+      {:error, _} = e ->
+        e
+
+      e ->
+        {:error, e}
+    end
   end

   defp maybe_add_jobs(multi, true) do
@@ -187,17 +206,7 @@ defmodule Pleroma.ScheduledActivity do
     |> where(user_id: ^user.id)
   end

-  def due_activities(offset \\ 0) do
-    naive_datetime =
-      NaiveDateTime.utc_now()
-      |> NaiveDateTime.add(offset, :millisecond)
-
-    ScheduledActivity
-    |> where([sa], sa.scheduled_at < ^naive_datetime)
-    |> Repo.all()
-  end
-
-  def job_query(scheduled_activity_id) do
+  defp job_query(scheduled_activity_id) do
     from(j in Oban.Job,
       where: j.queue == "scheduled_activities",
       where: fragment("args ->> 'activity_id' = ?::text", ^to_string(scheduled_activity_id))
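Since new/2 may now return an error tuple instead of a changeset (the attachment lookup checks that the media belongs to the scheduling user), create/2 is the module's public entry point and callers should handle non-changeset errors too. A hypothetical call; the attrs shape is assumed for illustration:

defmodule ScheduleSketch do
  # `user` is a %Pleroma.User{}; the field names follow the casts above.
  def schedule(user) do
    case Pleroma.ScheduledActivity.create(user, %{
           "scheduled_at" => "2030-01-01T00:00:00Z",
           "params" => %{"status" => "hello fediverse", "media_ids" => ["42"]}
         }) do
      {:ok, scheduled_activity} -> {:ok, scheduled_activity.id}
      {:error, %Ecto.Changeset{} = changeset} -> {:error, changeset.errors}
      {:error, reason} -> {:error, reason}
    end
  end
end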


@@ -21,19 +21,12 @@ defmodule Pleroma.Search.DatabaseSearch do
     offset = Keyword.get(options, :offset, 0)
     author = Keyword.get(options, :author)

-    search_function =
-      if :persistent_term.get({Pleroma.Repo, :postgres_version}) >= 11 do
-        :websearch
-      else
-        :plain
-      end
-
     try do
       Activity
       |> Activity.with_preloaded_object()
       |> Activity.restrict_deactivated_users()
       |> restrict_public()
-      |> query_with(index_type, search_query, search_function)
+      |> query_with(index_type, search_query)
       |> maybe_restrict_local(user)
       |> maybe_restrict_author(author)
       |> maybe_restrict_blocked(user)
@@ -72,25 +65,7 @@ defmodule Pleroma.Search.DatabaseSearch do
     )
   end

-  defp query_with(q, :gin, search_query, :plain) do
-    %{rows: [[tsc]]} =
-      Ecto.Adapters.SQL.query!(
-        Pleroma.Repo,
-        "select current_setting('default_text_search_config')::regconfig::oid;"
-      )
-
-    from([a, o] in q,
-      where:
-        fragment(
-          "to_tsvector(?::oid::regconfig, ?->>'content') @@ plainto_tsquery(?)",
-          ^tsc,
-          o.data,
-          ^search_query
-        )
-    )
-  end
-
-  defp query_with(q, :gin, search_query, :websearch) do
+  defp query_with(q, :gin, search_query) do
     %{rows: [[tsc]]} =
       Ecto.Adapters.SQL.query!(
         Pleroma.Repo,
@@ -108,19 +83,7 @@ defmodule Pleroma.Search.DatabaseSearch do
     )
   end

-  defp query_with(q, :rum, search_query, :plain) do
-    from([a, o] in q,
-      where:
-        fragment(
-          "? @@ plainto_tsquery(?)",
-          o.fts_content,
-          ^search_query
-        ),
-      order_by: [fragment("? <=> now()::date", o.inserted_at)]
-    )
-  end
-
-  defp query_with(q, :rum, search_query, :websearch) do
+  defp query_with(q, :rum, search_query) do
     from([a, o] in q,
       where:
         fragment(
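With the PostgreSQL >= 11 floor in place, the :plain fallback is gone and every query goes through websearch_to_tsquery, which understands quoted phrases, OR and -negation. The surviving :gin branch boils down to the following shape; a sketch assuming q already carries the activity and object bindings as in the module above, with tsc being the text-search-config OID resolved there:

defmodule WebsearchSketch do
  import Ecto.Query

  # websearch_to_tsquery accepts queries such as: "exact phrase" cats OR dogs -ferrets
  def websearch_query(q, tsc, search_query) do
    from([a, o] in q,
      where:
        fragment(
          "to_tsvector(?::oid::regconfig, ?->>'content') @@ websearch_to_tsquery(?)",
          ^tsc,
          o.data,
          ^search_query
        )
    )
  end
end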


@@ -5,15 +5,27 @@ defmodule Pleroma.Search.Meilisearch do
   alias Pleroma.Activity

   import Pleroma.Search.DatabaseSearch
+  import Ecto.Query

   @behaviour Pleroma.Search.SearchBackend

-  defp meili_headers do
-    private_key = Pleroma.Config.get([Pleroma.Search.Meilisearch, :private_key])
+  defp meili_headers(key) do
+    key_header =
+      if is_nil(key), do: [], else: [{"Authorization", "Bearer #{key}"}]

-    [{"Content-Type", "application/json"}] ++
-      if is_nil(private_key), do: [], else: [{"Authorization", "Bearer #{private_key}"}]
+    [{"Content-Type", "application/json"} | key_header]
+  end
+
+  defp meili_headers_admin do
+    private_key = Pleroma.Config.get([Pleroma.Search.Meilisearch, :private_key])
+    meili_headers(private_key)
+  end
+
+  defp meili_headers_search do
+    search_key =
+      Pleroma.Config.get([Pleroma.Search.Meilisearch, :search_key]) ||
+        Pleroma.Config.get([Pleroma.Search.Meilisearch, :private_key])
+
+    meili_headers(search_key)
   end

   def meili_get(path) do
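The admin/search split lets queries run under a low-privilege key while index management keeps the admin key. A hypothetical config entry; the option names follow the config paths read above, and the values are placeholders:

import Config

config :pleroma, Pleroma.Search.Meilisearch,
  url: "http://127.0.0.1:7700/",
  # always used for index management (put/delete)
  private_key: "REPLACE_WITH_ADMIN_KEY",
  # optional, lower-privilege; queries fall back to private_key when unset
  search_key: "REPLACE_WITH_SEARCH_KEY"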
@@ -22,7 +34,7 @@ defmodule Pleroma.Search.Meilisearch do
     result =
       Pleroma.HTTP.get(
         Path.join(endpoint, path),
-        meili_headers()
+        meili_headers_admin()
       )

     with {:ok, res} <- result do
@@ -30,14 +42,14 @@ defmodule Pleroma.Search.Meilisearch do
     end
   end

-  def meili_post(path, params) do
+  defp meili_search(params) do
     endpoint = Pleroma.Config.get([Pleroma.Search.Meilisearch, :url])

     result =
       Pleroma.HTTP.post(
-        Path.join(endpoint, path),
+        Path.join(endpoint, "/indexes/objects/search"),
         Jason.encode!(params),
-        meili_headers()
+        meili_headers_search()
       )

     with {:ok, res} <- result do
@@ -53,7 +65,7 @@ defmodule Pleroma.Search.Meilisearch do
       :put,
       Path.join(endpoint, path),
       Jason.encode!(params),
-      meili_headers(),
+      meili_headers_admin(),
       []
     )

@@ -70,7 +82,7 @@ defmodule Pleroma.Search.Meilisearch do
       :delete,
       Path.join(endpoint, path),
       "",
-      meili_headers(),
+      meili_headers_admin(),
       []
     )
   end
@@ -81,25 +93,20 @@ defmodule Pleroma.Search.Meilisearch do
     author = Keyword.get(options, :author)

     res =
-      meili_post(
-        "/indexes/objects/search",
-        %{q: query, offset: offset, limit: limit}
-      )
+      meili_search(%{q: query, offset: offset, limit: limit})

     with {:ok, result} <- res do
       hits = result["hits"] |> Enum.map(& &1["ap"])

       try do
         hits
-        |> Activity.create_by_object_ap_id()
+        |> Activity.get_presorted_create_by_object_ap_id()
+        |> Activity.with_preloaded_object()
         |> Activity.with_preloaded_object()
         |> Activity.restrict_deactivated_users()
         |> maybe_restrict_local(user)
         |> maybe_restrict_author(author)
         |> maybe_restrict_blocked(user)
         |> maybe_fetch(user, query)
-        |> order_by([object: obj], desc: obj.data["published"])
         |> Pleroma.Repo.all()
       rescue
         _ -> maybe_fetch([], user, query)


@@ -10,7 +10,7 @@ defmodule Pleroma.Signature do
   alias Pleroma.User
   alias Pleroma.Web.ActivityPub.ActivityPub

-  @known_suffixes ["/publickey", "/main-key"]
+  @known_suffixes ["/publickey", "/main-key", "#key"]

   def key_id_to_actor_id(key_id) do
     uri =
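Adding "#key" covers implementations that address their signing key as a URI fragment of the actor id rather than a /main-key-style sub-path. A simplified sketch of the suffix stripping; the real function parses the URI and may fall back to WebFinger:

defmodule KeyIdSketch do
  @known_suffixes ["/publickey", "/main-key", "#key"]

  # A key id is the actor id plus a well-known suffix; strip whichever matches.
  def key_id_to_actor_id(key_id) do
    Enum.reduce(@known_suffixes, key_id, &String.replace_suffix(&2, &1, ""))
  end
end

# KeyIdSketch.key_id_to_actor_id("https://example.com/users/alice#key")
# #=> "https://example.com/users/alice"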


@@ -13,7 +13,6 @@ defmodule Pleroma.Upload do
   * `:uploader`: override uploader
   * `:filters`: override filters
   * `:size_limit`: override size limit
-  * `:activity_type`: override activity type

   The `%Pleroma.Upload{}` struct: all documented fields are meant to be overwritten in filters:

@@ -48,7 +47,6 @@ defmodule Pleroma.Upload do
   @type option ::
           {:type, :avatar | :banner | :background}
           | {:description, String.t()}
-          | {:activity_type, String.t()}
           | {:size_limit, nil | non_neg_integer()}
           | {:uploader, module()}
           | {:filters, [module()]}
@@ -61,12 +59,23 @@ defmodule Pleroma.Upload do
           width: integer(),
           height: integer(),
           blurhash: String.t(),
+          description: String.t(),
           path: String.t()
         }

-  @always_enabled_filters [Pleroma.Upload.Filter.AnonymizeFilename]
+  @always_enabled_filters [Pleroma.Upload.Filter.Dedupe]

-  defstruct [:id, :name, :tempfile, :content_type, :width, :height, :blurhash, :path]
+  defstruct [
+    :id,
+    :name,
+    :tempfile,
+    :content_type,
+    :width,
+    :height,
+    :blurhash,
+    :description,
+    :path
+  ]

   @spec store(source, options :: [option()]) :: {:ok, Map.t()} | {:error, any()}
   @doc "Store a file. If using a `Plug.Upload{}` as the source, be sure to use `Majic.Plug` to ensure its content_type and filename is correct."
@@ -76,7 +85,7 @@ defmodule Pleroma.Upload do
     with {:ok, upload} <- prepare_upload(upload, opts),
          upload = %__MODULE__{upload | path: upload.path || "#{upload.id}/#{upload.name}"},
          {:ok, upload} <- Pleroma.Upload.Filter.filter(opts.filters, upload),
-         description = Map.get(opts, :description) || "",
+         description = Map.get(upload, :description) || "",
          {_, true} <-
           {:description_limit,
            String.length(description) <= Pleroma.Config.get([:instance, :description_limit])},
@@ -132,7 +141,7 @@ defmodule Pleroma.Upload do
     end

     %{
-      activity_type: Keyword.get(opts, :activity_type, activity_type),
+      activity_type: activity_type,
       size_limit: Keyword.get(opts, :size_limit, size_limit),
       uploader: Keyword.get(opts, :uploader, Pleroma.Config.get([__MODULE__, :uploader])),
       filters:
@@ -152,7 +161,8 @@ defmodule Pleroma.Upload do
       id: UUID.generate(),
       name: file.filename,
       tempfile: file.path,
-      content_type: file.content_type
+      content_type: file.content_type,
+      description: opts.description
     }}
   end
   end
@@ -172,7 +182,8 @@ defmodule Pleroma.Upload do
       id: UUID.generate(),
       name: hash <> "." <> ext,
       tempfile: tmp_path,
-      content_type: content_type
+      content_type: content_type,
+      description: opts.description
     }}
   end
   end
@@ -235,7 +246,7 @@ defmodule Pleroma.Upload do
     case uploader do
       Pleroma.Uploaders.Local ->
-        upload_base_url || Pleroma.Web.Endpoint.url() <> "/media/"
+        upload_base_url

       Pleroma.Uploaders.S3 ->
         bucket = Config.get([Pleroma.Uploaders.S3, :bucket])
@@ -261,7 +272,7 @@ defmodule Pleroma.Upload do
     end

       _ ->
-        public_endpoint || upload_base_url || Pleroma.Web.Endpoint.url() <> "/media/"
+        public_endpoint || upload_base_url
     end
   end
 end
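With description now living on %Pleroma.Upload{}, any filter in the chain can supply or rewrite it before store/2 reads it back out. A hypothetical filter using the same {:ok, :filtered, upload} convention as the Exiftool filter in the next file:

defmodule MyInstance.Upload.Filter.FallbackDescription do
  @behaviour Pleroma.Upload.Filter

  # Leave existing descriptions alone.
  def filter(%Pleroma.Upload{description: description}) when is_binary(description),
    do: {:ok, :noop}

  # Otherwise fall back to the file name.
  def filter(%Pleroma.Upload{} = upload),
    do: {:ok, :filtered, %Pleroma.Upload{upload | description: upload.name}}
end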


@@ -0,0 +1,51 @@
+# Pleroma: A lightweight social networking server
+# Copyright © 2017-2021 Pleroma Authors <https://pleroma.social/>
+# SPDX-License-Identifier: AGPL-3.0-only
+
+defmodule Pleroma.Upload.Filter.Exiftool.ReadDescription do
+  @moduledoc """
+  Gets a valid description from the related EXIF tags and provides it if no description is set yet.
+  It will first check ImageDescription; if that doesn't provide a valid description, it will check iptc:Caption-Abstract.
+  A valid description means the fields are filled in and not too long (see `:instance, :description_limit`).
+  """
+  @behaviour Pleroma.Upload.Filter
+
+  @spec filter(Pleroma.Upload.t()) :: {:ok, any()} | {:error, String.t()}
+
+  def filter(%Pleroma.Upload{description: description})
+      when is_binary(description),
+      do: {:ok, :noop}
+
+  def filter(%Pleroma.Upload{tempfile: file} = upload),
+    do: {:ok, :filtered, upload |> Map.put(:description, read_description_from_exif_data(file))}
+
+  def filter(_), do: {:ok, :noop}
+
+  defp read_description_from_exif_data(file) do
+    nil
+    |> read_when_empty(file, "-ImageDescription")
+    |> read_when_empty(file, "-iptc:Caption-Abstract")
+  end
+
+  defp read_when_empty(current_description, _, _) when is_binary(current_description),
+    do: current_description
+
+  defp read_when_empty(_, file, tag) do
+    try do
+      {tag_content, 0} =
+        System.cmd("exiftool", ["-b", "-s3", "-ignoreMinorErrors", "-q", "-q", tag, file],
+          parallelism: true
+        )
+
+      tag_content = String.trim(tag_content)
+
+      if tag_content != "" and
+           String.length(tag_content) <=
+             Pleroma.Config.get([:instance, :description_limit]),
+         do: tag_content,
+         else: nil
+    rescue
+      _ in ErlangError -> nil
+    end
+  end
+end
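The same tag lookup can be reproduced by hand to see what the filter would pick up, assuming exiftool is installed and photo.jpg stands in for a real upload:

# -b -s3 print just the raw tag value; the doubled -q silences warnings.
{description, 0} =
  System.cmd("exiftool", [
    "-b",
    "-s3",
    "-ignoreMinorErrors",
    "-q",
    "-q",
    "-ImageDescription",
    "photo.jpg"
  ])

String.trim(description)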

Some files were not shown because too many files have changed in this diff.