This will decouple filenames from shortcodes and
allow more image formats to work instead of only
those included in the auto-load glob. (Though previously we
still saved other formats to disk anyway, wasting space.)
Furthermore, this will allow us to make
final URL paths infeasible to predict.
Since three commits ago we restrict shortcodes to a subset of
the POSIX Portable Filename Character Set, so the shortcode
can never contain a directory component.
E.g. *key’s emoji URLs typically don’t have file extensions, but
until now we just slapped ".png" onto the end hoping for the best.
Furthermore, this gives us a chance to actually reject non-images,
which was not feasible before exactly due to those extension-less URLs.
As suggested in b387f4a1c1, only steal
emoji with alphanumeric, dash, or underscore characters.
Also consolidate all validation logic into a single function.
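For illustration, such a consolidated check boils down to a single
regex (sketch only; module and function names here are hypothetical,
not the actual API):

    # Hypothetical consolidated validator; names are illustrative.
    defmodule EmojiValidation do
      # Alphanumerics, dash and underscore only: a dot-less subset of
      # the POSIX Portable Filename Character Set, so no directory
      # components or extension tricks are possible.
      @shortcode_format ~r/^[a-zA-Z0-9_-]+$/

      def valid_shortcode?(shortcode) when is_binary(shortcode),
        do: Regex.match?(@shortcode_format, shortcode)

      def valid_shortcode?(_), do: false
    end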
===
Taken from akkoma#703 with cosmetic tweaks
This matches our existing validation logic from Pleroma.Emoji
and, apart from also excluding the dot, POSIX’s Portable Filename
Character Set, making it always safe for use in filenames.
Mastodon is even stricter, also disallowing U+002D HYPHEN-MINUS
and requiring at least two characters.
Given that both we and Mastodon reject shortcodes excluded
by this anyway, it doesn’t seem like a loss.
Even more than with user uploads, a same-domain proxy setup bears
significant security risks due to serving untrusted content under
the main domain space.
A risky setup like that should never be the default.
Just as with uploads and emoji before, this can otherwise be used
to place counterfeit AP objects or other malicious payloads.
In this case, even if we never assign a privileged type to content
ourselves, the remote server can, and until now we just mimicked
whatever it told us.
Preview URLs already handle only specific, safe content types
and redirect to the external host for all else; thus no additional
sanitisation is needed for them.
Non-previews are all delegated to the modified ReverseProxy module.
It already has consolidated logic for building response headers,
making it easy to slip in sanitisation, as sketched below.
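Illustrative Elixir only; the allowlist and names are assumptions,
not the actual ReverseProxy code:

    defmodule ProxySanitiser do
      # Assumed-for-illustration allowlist; the real safe subset differs.
      @safe_types ~w(image/gif image/jpeg image/png image/webp
                     video/mp4 video/webm audio/mpeg audio/ogg)

      # Rewrite the upstream Content-Type while assembling response
      # headers; anything not explicitly safe degrades to octet-stream.
      def sanitise_content_type(headers) do
        sanitised =
          case List.keyfind(headers, "content-type", 0) do
            {_, type} when type in @safe_types -> type
            _ -> "application/octet-stream"
          end

        List.keystore(headers, "content-type", 0, {"content-type", sanitised})
      end
    end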
Although proxy URLs are prefixed by a MAC built from a server secret,
attackers can still achieve a perfect id match when they are able to
change the contents of the pointed-to URL. After sending a post
containing an attachment pointing at a controlled destination, the
proxy URL can be read back and inserted into the payload. After
injecting counterfeits into the target server, the content can again
be changed to something innocuous, lessening the chance of detection.
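For context, the MAC scheme amounts to roughly the following
(general sketch, not the exact implementation); note the signature
binds the URL, not the content behind it:

    # Sketch: sign the target URL with a server-side secret. Only URLs
    # the server itself minted get proxied, but the content behind
    # target_url can still change after signing, which is exactly what
    # the attack described above exploits.
    def build_proxy_url(base_url, secret, target_url) do
      sig =
        :crypto.mac(:hmac, :sha256, secret, target_url)
        |> Base.url_encode64(padding: false)

      "#{base_url}/proxy/#{sig}/#{Base.url_encode64(target_url, padding: false)}"
    end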
Alternatively, malicious emoji packs or our EmojiStealer MRF can
put payloads onto the same domain as the instance itself.
Sanitising the content type should prevent proper clients
from acting on any potential payload.
Note, this does not affect the default emoji shipped with Akkoma
as they are handled by another plug. However, those are fully trusted
and thus not in need of sanitisation.
This was actually already intended before, to eradicate all future
path-traversal-style exploits and to fix issues with some
characters like akkoma#610 in 0b2ec0ccee. However, Dedupe and
AnonymizeFilename got mixed up. The latter only anonymises the name
in Content-Disposition headers and GET parameters (with link_name),
_not_ the upload path.
Even without Dedupe, the upload path is prefixed by a UUID,
so it _should_ already be hard to guess for attackers. But now
we can actually be sure no path shenanigans occur, uploads
reliably work, and we save some disk space.
While this makes the final path predictable, this prediction is
not exploitable. Inserting a back-reference to the upload
itself would require pulling off a successful preimage attack
against SHA-256, which is deemed infeasible for the foreseeable
future.
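For reference, content-addressed naming à la Dedupe boils down to
the following (simplified sketch):

    # Identical contents always yield the same name, and forging a
    # file which hashes to a chosen existing name would require a
    # preimage attack against SHA-256.
    def dedupe_filename(contents, extension) do
      digest =
        :crypto.hash(:sha256, contents)
        |> Base.encode16(case: :lower)

      digest <> "." <> extension
    end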
Dedupe was already included in the default list in config.exs
since 28cfb2c37a, but this gets overridden by whatever the
config generated by the "pleroma.instance gen" task chose.
Upload+delete tests running in parallel using Dedupe might be flaky, but
this was already true before and needs its own commit to fix eventually.
The lack thereof enables spoofing ActivityPub objects.
A malicious user could upload fake activities as attachments
and (given access to remote search) trick local and remote
fedi instances into fetching and processing them as valid objects.
If uploads are hosted on the same domain as the instance itself,
it is possible for anyone with upload access to impersonate(!)
other users of the same instance.
If uploads are exclusively hosted on a different domain, even the most
basic check of domain of the object id and fetch url matching should
prevent impersonation. However, it may still be possible to trick
servers into accepting bogus users on the upload (sub)domain and bogus
notes attributed to such users.
Instances which later migrated to a different domain and have a
permissive redirect rule in place can still be vulnerable.
If — like Akkoma — the fetching server is overly permissive with
redirects, impersonation still works.
This was possible because Plug.Static also applies our custom
MIME type mappings, which are meant for actually authentic AP objects.
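The mappings in question are of this kind (sketch; the exact set in
our config may differ):

    # With such a mapping in place, Plug.Static would serve a stored
    # file named e.g. "spoof.activity+json" with an AP Content-Type.
    config :mime, :types, %{
      "application/activity+json" => ["activity+json"],
      "application/ld+json" => ["activity+json"]
    }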
Provided external storage providers don’t somehow return ActivityStreams
Content-Types on their own, instances using those are also safe against
their users being spoofed via uploads.
Akkoma instances using the OnlyMedia upload filter
cannot be exploited as a vector in this way — IF the
fetching server validates the Content-Type of
fetched objects (Akkoma itself does this already).
However, restricting uploads to only multimedia files may be a bit too
heavy-handed. Instead this commit will restrict the returned
Content-Type headers for user uploaded files to a safe subset, falling
back to generic 'application/octet-stream' for anything else.
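Conceptually (the concrete allowlist here is illustrative, not the
committed one):

    # Pass common media types through, but degrade anything else (and
    # script-capable types like SVG) to a generic type clients won't
    # interpret as AP objects or HTML.
    defp allowed_mime?("image/svg" <> _), do: false
    defp allowed_mime?("image/" <> _), do: true
    defp allowed_mime?("video/" <> _), do: true
    defp allowed_mime?("audio/" <> _), do: true
    defp allowed_mime?(_), do: false

    def sanitised_type(type) do
      if allowed_mime?(type), do: type, else: "application/octet-stream"
    end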
This will also protect against non-AP payloads as e.g. used in
past frontend code injection attacks.
It’s a slight regression in user comfort if, say, PDFs are uploaded,
but this trade-off seems fairly acceptable.
(Note, just excluding our own custom types would offer no protection
against non-AP payloads and bear a (perhaps small) risk of a silent
regression should MIME ever decide to add a canonical extension for
ActivityPub objects)
Now, one might expect there to be other defence mechanisms
besides Content-Type preventing counterfeits from being accepted,
like e.g. validation of the queried URL and AP ID matching.
Inserting a self-reference into our uploads is hard, but unfortunately
*oma does not verify the id in such a way and happily accepts _anything_
from the same domain (without even considering redirects).
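A stricter check would compare the object’s id against the URL
actually fetched rather than just the domain; hypothetically:

    # Hypothetical sketch: require an exact match between the object's
    # "id" and the final URL we fetched it from (after redirects),
    # instead of accepting anything from the same domain.
    def id_matches_fetch_url?(%{"id" => id}, final_url),
      do: URI.parse(id) == URI.parse(final_url)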
E.g. Sharkey (and possibly other *keys) seems to attempt to guard
against this by immediately refetching the object from its ID, but
this is easily circumvented by just uploading two payloads, with the
ID of one linking to the other.
Unfortunately *oma is thus _both_ a vector for spoofing and
vulnerable to those spoof payloads, resulting in an easy way
to impersonate our users.
Similar flaws exist for emoji and the media proxy.
Subsequent commits will fix this by rigorously sanitising
content types in more areas, hardening our checks, improving
the default config and discouraging insecure config options.
Mastodon at the very least seems to prevent the creation of emoji with
dots in their name (and refuses to accept them in federation). It feels
like being cautious in what we accept is reasonable here.
Colons are the emoji separator and so obviously should be blocked.
Perhaps instead of filtering out things like this we should just
do a regex match on `[a-zA-Z0-9_-]`? But that's plausibly a decision
for another day.
Perhaps we should also have a centralised "is this a valid emoji
shortcode?" function.
This partly reverts 1d884fd914
while fixing both the issue it addressed and the issue it caused.
The above commit successfully fixed OpenGraph metadata tags
which until then always showed the user bio instead of post content
by handing the activity’s AP ID as url to the Metadata builder
_instead_ of passing the internal ID as activity_id.
However, in doing so the commit instead inflicted this very problem
onto Twitter metadata tags which ironically are used by akkoma-fe.
This is because while the OpenGraph builder wants a URL as url,
the Twitter builder needs the internal ID to build the URL to the
embedded player for videos and has no URL property.
Thanks to twpol for tracking down this root cause in #644.
Now, once identified the problem is simple, but this simplicity
invites multiple possible solutions to bikeshed about.
1. Just pass both properties to the builder and let them pick
2. Drop the url parameter from the OpenGraph builder and instead
a) build static-fe URL of the post from the ID (like Twitter)
b) use the passed-in object’s AP ID as an URL
Approach 2a has the disadvantage of hardcoding the expected URL outside
the router, which will be problematic should it ever change.
Approach 2b is conceptually similar to how the builder works atm.
However, the og:url is supposed to be a _permanent_ ID; by changing it
we might, afaiui, technically violate the OpenGraph spec(?). (Though its
real-world consequence may very well be near non-existent.)
This leaves just approach 1, which this commit implements.
Albeit it too is not without nits to pick, as it leaves the metadata
builders with an inconsistent interface.
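With approach 1, the call site simply hands both values to every
builder and each picks what it needs (sketch; the actual parameter
names may differ slightly):

    # OpenGraph takes the permanent AP ID for og:url, the Twitter
    # builder takes the internal id to construct its player URL.
    Metadata.build_tags(%{
      url: object.data["id"],      # permanent AP ID, used as og:url
      activity_id: activity.id,    # internal id, used by Twitter cards
      object: object,
      user: user
    })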
Additionally, this will resolve the suboptimal Discord previews for
content-less image posts reported in #664.
Discord already prefers OpenGraph metadata, so it’s mostly unaffected.
However, it appears when encountering an explicitly empty OpenGraph
description and a non-empty Twitter description, it replaces just the
empty field with its Twitter counterpart, resulting in the user’s bio
slipping into the preview.
Secondly, regardless of any OpenGraph tags, Discord uses twitter:card
to decide how prominently images should be displayed, but due to the
bug the card type was stuck as "summary", forcing images to always
remain small.
Root cause identified by: twpol
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/644
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/664
While currently our own frontend doesn’t show backgrounds of other
users, this property is already publicly readable via the REST API
and likely was always intended to be shown and federated.
Recently Sharkey added support for profile backgrounds and
immediately made them federate and be displayed to others.
We use the same AP field as Sharkey here which should make
it interoperable both ways out-of-the-box.
Ref.: 4e64397635
Fixes a misspelling and the omission of an example in commit
0cfd5b4e89, which added the
status_ttl_days property. This was the only place that commit
referred to the property as note_ttl_days.
Partially fixes the omitted schema update of the instance metadata addition
from commit b7e8ce2350. A proper full schema
for nodeinfo is still missing.
OTP’s default SSL/TLS settings are rather restrictive
and in particular do not use system CA certs.
In our case using system CA certs is virtually always desired
and the lack of it leads to non-obvious errors. Manually configuring
system CA certs from in-database config also isn’t straightforward.
Furthermore, gen_smtp uses a different set of connection options
for direct SSL/TLS than for a later TLS upgrade, adding further
confusion and complexity to how this needs to be configured.
Thus provide some suitable defaults for sending SMTP emails.
Everything can still be overridden by admins if necessary.
Note: defaults are not appended when validating the config,
in hopes of improving the error message (as the required relay key
is already accessed to generate defaults for optional fields).
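The defaults boil down to something like the following (sketch; the
exact option set in this commit may differ):

    # Verify peers against the system CA store (OTP 25+ exposes it via
    # :public_key.cacerts_get/0) instead of OTP's bare defaults.
    # gen_smtp needs such options supplied separately for direct
    # SSL/TLS connections and for a later STARTTLS upgrade.
    tls_options = [
      verify: :verify_peer,
      cacerts: :public_key.cacerts_get(),
      depth: 3,
      customize_hostname_check: [
        match_fun: :public_key.pkix_verify_hostname_match_fun(:https)
      ]
    ]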
Fixes: https://akkoma.dev/AkkomaGang/akkoma/issues/660
With kilobytes the resulting numbers got too large and were cut off
in the charts, making them useless. However, even an idle Akkoma
server’s memory usage is in the lower hundreds of megabytes, so
we don’t need this much precision for the dashboard to begin with.
Other metric users might prefer base units and can handle scaling in a
smarter way, so keep this configurable.
The spec was copied from another endpoint, including the operation
id, leading to the valid parameters being scrubbed from the request
and the endpoint simply not working.
The previous code passed a state parameter to ueberauth with info
about where to go after the user logged in, etc.
Since ueberauth 0.7, this parameter is ignored and the OAuth state
is used for actual CSRF-prevention purposes.
We now set a cookie with the state we need to keep track of, and
read it back once the callback happens.
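Conceptually (sketch; cookie name, lifetime and options here are
illustrative):

    # Before redirecting to the provider: stash our own state in a
    # signed, short-lived cookie instead of the OAuth state parameter.
    conn
    |> Plug.Conn.put_resp_cookie("ueberauth_state_param", our_state,
      sign: true,
      same_site: "Lax",
      max_age: 300
    )
    |> Phoenix.Controller.redirect(external: authorize_url)

    # In the callback: read it back.
    conn = Plug.Conn.fetch_cookies(conn, signed: ["ueberauth_state_param"])
    our_state = conn.cookies["ueberauth_state_param"]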