This brings it in line with its name and closes an (in practice
harmless) verification hole.
This was/is the only user of contain_origin making it
safe to change the behaviour on actor-less objects.
Until now refetched objects did not ensure the new actor matches the
domain of the object. We refetch polls occasionally to retrieve
up-to-date vote counts. A malicious AP server could have switched out
the poll after initial posting with a completely different post
attributed to an actor from another server.
While we indeed fell for this spoof before the commit,
it fortunately seems to have had no ill effect in practice,
since the associated Create activity is not changed. When exposing the
actor via our REST API, we read this info from the activity, not the
object.
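A minimal sketch of the kind of origin check a refetch can now enforce
(module-less fragment; function names are illustrative, not the actual
implementation):

```elixir
# Only accept the refetched object if its actor still lives on the same
# host as the object id itself (illustrative sketch).
defp same_origin?(%{"id" => id, "actor" => actor}) when is_binary(actor),
  do: URI.parse(id).host == URI.parse(actor).host

defp same_origin?(%{"id" => id, "attributedTo" => actor}) when is_binary(actor),
  do: URI.parse(id).host == URI.parse(actor).host

defp refetch_object(%{"id" => id}) do
  # fetch_from_remote/1 stands in for whatever performs the actual HTTP fetch
  with {:ok, new_data} <- fetch_from_remote(id),
       true <- same_origin?(new_data) do
    {:ok, new_data}
  else
    false -> {:error, :actor_domain_mismatch}
    error -> error
  end
end
```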
At first thought this still seems to keep one avenue for exploitation open though:
the updated actor can be from our own domain and a third server be
instructed to fetch the object from us. However this is foiled by an
id mismatch. By necessity of being fetchable and our longstanding
same-domain check, the id must still be from the attacker’s server.
Even the most barebone authenticity check is able to suss this out.
Such redirects on AP queries seem most likely to be a spoofing attempt.
If the object is legit, the id should match the final domain anyway and
users can directly use the canonical URL.
The lack of such a check (and use of the initially queried domain’s
authority instead of the final domain) was enabling the current exploit
to even affect instances which already migrated away from a same-domain
upload/proxy setup in the past, but retained a redirect to not break old
attachments.
(In theory this redirect could, with some effort, have been limited to
only old files, but common guides employed a catch-all redirect, which
allows even future uploads to be reachable via an initial query to the
main domain)
Same-domain redirects are valid and also used by ourselves,
e.g. for redirecting /notice/XXX to /objects/YYY.
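Illustratively, the check amounts to comparing the host the document was
finally served from with the host of the object's claimed id (sketch only;
same-domain redirects like the above naturally pass):

```elixir
# Reject AP fetches where redirects landed us on a host that doesn't
# match the object's claimed id (illustrative, not the actual code).
defp verify_final_host(final_url, %{"id" => id}) do
  if URI.parse(final_url).host == URI.parse(id).host do
    :ok
  else
    {:error, :cross_domain_redirect}
  end
end
```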
Turns out we already had a test for activities spoofed via upload due
to an exploit several years ago. Back then *oma did not verify content-type
at all and doing so was the only adopted countermeasure.
Even the added test sample though suffered from a mismatching id, yet
nobody seems to have thought it a good idea to tighten id checks, huh
Since we will add stricter id checks later, make id and URL match
and also add a testcase for no content type at all. The new section
will be expanded in subsequent commits.
No new path traversal attacks are known. But given the many entrypoints
and code flow complexity inside pack.ex, it unfortunately seems
possible a future refactor or addition might reintroduce one.
Furthermore, some old packs might still contain traversing path entries
which could trigger undesirable actions on rename or delete.
To ensure this can never happen, assert safety during path construction.
Path.safe_relative was introduced in Elixir 1.14, but
fortunately, we already require at least 1.14 anyway.
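A minimal sketch of how Path.safe_relative can be used to assert safety
while building pack paths (function name and error handling are
illustrative; the real code lives in pack.ex):

```elixir
# Join a user-supplied path onto the pack directory, refusing anything
# that would escape it (e.g. "../../config").
defp safe_pack_path!(base_dir, user_path) do
  case Path.safe_relative(user_path) do
    {:ok, relative} -> Path.join(base_dir, relative)
    :error -> raise ArgumentError, "unsafe path: #{inspect(user_path)}"
  end
end
```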
To save on bandwidth and avoid OOMs with large files.
Ofc, this relies on the remote server
(a) sending a content-length header and
(b) being honest about the size.
Common fedi servers seem to provide the header and (b) at least raises
the required privilege of a malicious actor to a server infrastructure
admin of an explicitly allowed host.
A more complete defense which still works when faced with
a malicious server requires changes in upstream Finch;
see https://github.com/sneako/finch/issues/224
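Roughly, the guard amounts to something like the following (header handling
simplified to a plain header list; @max_body is an illustrative limit, not
the configured value):

```elixir
@max_body 25 * 1_048_576

# Bail out early when the remote announces a body larger than we accept.
# Without a content-length header we cannot do better client-side until
# Finch gains streaming limits (see the linked issue).
defp check_content_length(headers) do
  case List.keyfind(headers, "content-length", 0) do
    {_, value} ->
      case Integer.parse(value) do
        {length, ""} when length <= @max_body -> :ok
        _ -> {:error, :body_too_large}
      end

    nil ->
      :ok
  end
end
```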
E.g. *key’s emoji URLs typically don’t have file extensions, but
until now we just slapped ".png" onto their ends hoping for the best.
Furthermore, this gives us a chance to actually reject non-images,
which before was not feasible exactly due to those extension-less URLs.
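Illustrative only (not the actual code): deriving the extension from the
response Content-Type and rejecting anything that is not an image could
look like this:

```elixir
defp extension_for_type("image/png"), do: {:ok, ".png"}
defp extension_for_type("image/jpeg"), do: {:ok, ".jpg"}
defp extension_for_type("image/gif"), do: {:ok, ".gif"}
defp extension_for_type("image/webp"), do: {:ok, ".webp"}
defp extension_for_type(_other), do: {:error, :not_an_image}
```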
Just as with uploads and emoji before, this can otherwise be used
to place counterfeit AP objects or other malicious payloads.
In this case, even if we never assign a privileged type to content,
the remote server can, and until now we just mimicked whatever it told us.
Preview URLs already handle only specific, safe content types
and redirect to the external host for all else; thus no additional
sanitisation is needed for them.
Non-previews are all delegated to the modified ReverseProxy module.
It already has consolidated logic for building response headers
making it easy to slip in sanitisation.
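A hedged sketch of the header sanitisation idea (the allowlist and helper
name are illustrative, not the actual implementation):

```elixir
@safe_types ~w(image/png image/jpeg image/gif image/webp video/mp4 audio/mpeg)

# Replace any upstream Content-Type not on the allowlist with a neutral
# one, so proxied bytes can't masquerade as e.g. activity+json.
defp sanitise_content_type(headers) do
  {_, type} = List.keyfind(headers, "content-type", 0) || {"content-type", ""}
  safe = if type in @safe_types, do: type, else: "application/octet-stream"
  List.keystore(headers, "content-type", 0, {"content-type", safe})
end
```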
Although proxy URLs are prefixed by a MAC built from a server secret,
attackers can still achieve a perfect id match when they are able to
change the contents of the pointed-to URL. After sending a post
containing an attachment pointing to a controlled destination, the proxy
URL can be read back and inserted into the payload. After injection of
counterfeits into the target server, the content can again be changed
to something innocuous, lessening the chance of detection.
This actually was already intended before to eradicate all future
path-traversal-style exploits and to fix issues with some
characters like akkoma#610 in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
used in Content-Disposition headers (with the link_name GET parameter),
_not_ the upload path.
Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we can actually be sure no path shenanigans occur, uploads
reliably work and we save some disk space.
While this makes the final path predictable, this prediction is
not exploitable. Insertion of a back-reference to the upload
itself requires pulling off a successful preimage attack against
SHA-256, which is deemed infeasible for the foreseeable future.
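For reference, what Dedupe effectively does (simplified sketch, not the
actual filter code):

```elixir
# The stored filename is derived from a SHA-256 digest of the contents,
# so identical uploads map to one predictable but unforgeable path.
defp dedupe_filename(contents, extension) do
  digest =
    :crypto.hash(:sha256, contents)
    |> Base.encode16(case: :lower)

  digest <> extension
end
```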
Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this gets overridden by whatever the
config generated by the "pleroma.instance gen" task chose.
Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.
Currently our own frontend doesn’t show backgrounds of other users, but this
property is already publicly readable via the REST API and likely was always
intended to be shown and federated.
Recently Sharkey added support for profile backgrounds and
immediately made them federate and be displayed to others.
We use the same AP field as Sharkey here which should make
it interoperable both ways out-of-the-box.
Ref.: 4e64397635
The spec was copied from another endpoint, including the operation id,
leading to the valid parameters being scrubbed from the request and the
endpoint simply not working.
Implements the preferences endpoint in the Mastodon API, but returns
default values for most of the preferences right now. The only supported
preference we can access is default post visibility, and a relevant test
is added as well.
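Roughly the shape of the returned data, using the Mastodon API's preference
keys (the visibility is taken from the user's settings; the other values
shown are the hardcoded defaults):

```elixir
%{
  # read from the user's default post visibility setting
  "posting:default:visibility" => "public",
  # static defaults for now
  "posting:default:sensitive" => false,
  "posting:default:language" => nil,
  "reading:expand:media" => "default",
  "reading:expand:spoilers" => false
}
```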
OTP builds to 1.15
Changelog entry
Ensure policies are fully loaded
Fix :warn
use main branch for linkify
Fix warn in tests
Migrations for phoenix 1.17
Revert "Migrations for phoenix 1.17"
This reverts commit 6a3b2f15b7.
Oban upgrade
Add default empty whitelist
mix format
limit test to amd64
OTP 26 tests for 1.15
use OTP_VERSION tag
baka
just 1.15
Massive deps update
Update locale, deps
Mix format
shell????
multiline???
?
max cases 1
use assert_receive
don't put_env in async tests
don't async conn/fs tests
mix format
Fix some uploader issues
Fix tests
Set it to `inline` because the vast majority of what's sent is multimedia
content while `attachment` would have the side-effect of triggering a
download dialog.
Closes: https://git.pleroma.social/pleroma/pleroma/-/issues/3114
TwitterCard meta tags are supposed to use the attributes "name" and "content".
OpenGraph tags use the attributes "property" and "content".
Twitter itself is smart enough to detect broken meta tags and discover the TwitterCard
using "property" and "content", but other platforms that only implement parsing of TwitterCards
and not OpenGraph may fail to correctly detect the tags as they're under the wrong attributes.
> "Open Graph protocol also specifies the use of property and content attributes for markup while
> Twitter cards use name and content. Twitter’s parser will fall back to using property and content,
> so there is no need to modify existing Open Graph protocol markup if it already exists." [0]
[0] https://developer.twitter.com/en/docs/twitter-for-websites/cards/guides/getting-started
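For illustration, the corrected markup uses both attribute styles side by
side (values are placeholders):

```html
<!-- TwitterCard tags use name= -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Example post" />
<!-- OpenGraph tags keep using property= -->
<meta property="og:title" content="Example post" />
<meta property="og:image" content="https://example.com/preview.png" />
```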
`context` fields for objects and activities can now be generated based
on the object/activity `inReplyTo` field or its ActivityPub ID, as a
fallback method in cases where `context` fields are missing for incoming
activities and objects.
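A simplified sketch of the fallback order (names illustrative, not the
exact implementation):

```elixir
# Prefer an existing context, then inReplyTo, then the object's own id.
defp fill_context(%{"context" => context} = data) when is_binary(context), do: data

defp fill_context(%{"inReplyTo" => in_reply_to} = data) when is_binary(in_reply_to),
  do: Map.put(data, "context", in_reply_to)

defp fill_context(%{"id" => id} = data), do: Map.put(data, "context", id)
```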
When doing prune_objects, it's possible that bookmarked objects are deleted.
This gave problems when fetching the bookmark TL.
Here we clean up the bookmarks during pruning in cases where bookmarked objects may have been deleted.
weird values in href will cause base64 encoding to fail later down the
line, so let's make sure the value we're passing on is somewhat sane, or
at the very least a binary.
This fixes #482.
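One way to express that last-resort guarantee (illustrative sketch, not the
exact code):

```elixir
# Ensure whatever ends up in href is at least a binary before it is
# base64-encoded further down the line.
defp ensure_binary_href(href) when is_binary(href), do: href
defp ensure_binary_href(href), do: inspect(href)
```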
When no image description is filled in, Pleroma allowed fallbacks.
Those were (based on a setting) either the filename, or a fixed description.
Neither is a good option for image descriptions imo, so here we remove this.
Note that two tests were removed which supposedly tested something else.
But examining them closer, they didn't seem to test what they claimed to test,
so I removed them rather than try to "fix" them.
E.g. Flag activities have an array of objects
We prune the activity when NONE of the objects can be found
Note that the cost of finding and deleting these is ~4x higher than finding and deleting the non-array ones
Only string:
Delete on activities (cost=506573.48..506580.38 rows=0 width=0)
Only Array:
Delete on activities (cost=3570359.68..4276365.34 rows=0 width=0)
(They are still executed separately, so the total cost is the sum of the two)
We add an option to also prune remote activities whose referenced objects no longer exist.
Right now, we only check for activities that reference a single object, not an array or embedded object.
This adds an option to the prune_objects mix task.
The original way deleted all non-local public posts older than a certain time frame.
Here we add a different query which you can call using the option --keep-threads.
We query from the activities table all context ids where
1. the newest activity with this context is still old
2. none of the activities with this context is local
3. none of the activities with this context is bookmarked
and delete all objects with these contexts.
The idea is that posts with local activities (posts, replies, likes, repeats...) may be interesting to keep.
Besides that, a post lives in a certain context (the thread), so we keep the whole thread as well.
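As a usage example, the thread-preserving prune can be combined with the
non-public option discussed in the caveats and TODO below (task name as used
by the existing prune task; adjust to your setup):

```sh
# keep whole threads that contain local or bookmarked activity,
# and also keep old remote non-public posts
mix pleroma.database prune_objects --keep-threads --keep-non-public
```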
Caveats:
* ~~Quotes have a different context. Therefore, when someone quotes a post, it's possible the quoted post will still be deleted.~~ fixed in https://akkoma.dev/AkkomaGang/akkoma/pulls/379
* Although undocumented (in docs/docs/administration/CLI_tasks/database.md/#prune-old-remote-posts-from-the-database), the 'normal' delete action still kept old remote non-public posts. I added an option to keep this behaviour, but this also means that you now have to explicitly provide that option. **This could be considered a breaking change!**
* ~~Note that this removes from the objects table, but not from the activities.~~ See https://akkoma.dev/AkkomaGang/akkoma/pulls/427 for that.
Some statistics from explain analyse:
(cost=1402845.92..1933782.00 rows=3810907 width=62) (actual time=2562455.486..2562455.495 rows=0 loops=1)
Planning Time: 505.327 ms
Trigger for constraint chat_message_references_object_id_fkey: time=651939.797 calls=921740
Trigger for constraint deliveries_object_id_fkey: time=52036.009 calls=921740
Trigger for constraint hashtags_objects_object_id_fkey: time=20665.778 calls=921740
Execution Time: 3287933.902 ms
***
**TODO**
1. [x] **Question:** Is it OK to keep it like this in regard to quote posts? If not (ie post quoted by local users should also be kept), should we give quotes the same context as the post they are quoting? (If we don't want to give them the same context, I'll have to see how/if I can do it without being too costly)
* See https://akkoma.dev/AkkomaGang/akkoma/pulls/379
2. [x] **Question:** the "original" query only deletes public posts (this is undocumented, but you can check the code). This new one doesn't care for scope. From the docs I get that the idea is that posts can be refetched when needed. But I have from a trusted source that Pleroma can't refetch non-public posts. I assume that's the reason why they are kept here. I see different options to deal with this
1. ~~We keep it as currently implemented and just don't care about scope with this option~~
2. ~~We add logic to not delete non-public posts either (I'll have to see how costly that becomes)~~
3. We add an extra --keep-non-public parameter. This is technically speaking breakage (you didn't have to provide a param before for this, now you do), but I'm inclined to not care much because it wasn't documented nor tested in the first place.
3. [x] See if we can do the query using Elixir
4. [x] Test on a bigger DB to see that we don't run into a timeout
5. [x] Add docs
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/350
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
Some users post posts with spoofed timestamps, and some clients will have issues with certain dates. Tusky for example crashes if the date is any earlier than 1 BCE (“year zero” in the representation).
I limited the range of what is considered a valid date to be somewhere between the years 1583 and 9999 (inclusive).
The numbers have been chosen because:
- ISO 8601 only allows years before 1583 with “mutual agreement”
- Years after 9999 could cause issues with certain clients as well
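A minimal sketch of the kind of range check this implies (illustrative
names, not the actual validator):

```elixir
# Reject published timestamps outside the ISO 8601-friendly range.
defp valid_year?(%DateTime{year: year}), do: year in 1583..9999

defp validate_published(%DateTime{} = datetime) do
  if valid_year?(datetime), do: {:ok, datetime}, else: {:error, :invalid_date}
end
```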
Co-authored-by: Charlotte 🦝 Delenk <lotte@chir.rs>
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/425
Co-authored-by: darkkirb <lotte@chir.rs>
Co-committed-by: darkkirb <lotte@chir.rs>
See https://akkoma.dev/AkkomaGang/akkoma/pulls/350#issuecomment-6109
When making quotes through Mast-API, they will now have the same context as the quoted post. This also results in them being shown when fetching the thread. I checked Misskey to see how it's handled there, and they show the quotes there as well, see e.g. <https://mk.toast.cafe/notes/98u1g0tulg>.
An example from Akkoma:
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/379
Reviewed-by: floatingghost <hannah@coffee-and-dreams.uk>
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>
Argos Translate is a Python module for translation and can be used as a command line tool.
This is also the engine for LibreTranslate, for which we already have a module.
Here we can use the engine directly from our server without doing requests to a third party or having to install our own LibreTranslate webservice (obviously you do have to install Argos Translate).
One thing that's currently still missing from Argos Translate is auto-detection of languages (see <https://github.com/argosopentech/argos-translate/issues/9>). For now, when no source language is provided, we just return the text unchanged, supposedly translated from the target language. That way you get a near immediate response in pleroma-fe when clicking Translate, after which you can select the source language from a dropdown.
Argos Translate also doesn't seem to handle html very well. Therefore we give admins the option to strip the html before translating. I made this an option because I'm unsure if/how this will change in the future.
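A sketch of the described fallback behaviour when no source language is
known (function names are illustrative, not the actual module):

```elixir
# Without a source language we skip Argos Translate entirely and hand the
# text back unchanged, nominally "translated" from the target language.
defp translate(text, nil = _source_language, target_language),
  do: {:ok, text, target_language}

defp translate(text, source_language, target_language),
  do: run_argos_translate(text, source_language, target_language)
```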
Co-authored-by: ilja <git@ilja.space>
Reviewed-on: https://akkoma.dev/AkkomaGang/akkoma/pulls/351
Co-authored-by: ilja <akkoma.dev@ilja.space>
Co-committed-by: ilja <akkoma.dev@ilja.space>