Google Chrome Frame was a plugin for Internet Explorer, but it is no longer
supported. It was also included via http instead of https, which triggered a
warning on most pods.
Also set the minimum version to 9 to support #6557.
closes #6751
Also:
* remove ShareablesFromPerson evil-query
* improve multi-stream and aspect-stream queries
* fix logging for receive
* don't add last 100 public posts to users streams after sharing
* delete share visibility when shareable is deleted
According to http://openid.net/specs/openid-connect-core-1_0.html#PairwiseAlg:
"If the Client has not provided a value for
sector_identifier_uri in Dynamic Client Registration
[OpenID.Registration], the Sector Identifier used
for pairwise identifier calculation is the host
component of the registered redirect_uri."
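A rough sketch of that fallback (attribute and method names here are illustrative, not the actual implementation):

    require "digest"
    require "uri"

    # Fall back to the redirect_uri host when no sector_identifier_uri was
    # registered, then derive a salted pairwise sub from it.
    def pairwise_sub(client, local_account_id, salt)
      sector = client.sector_identifier_uri || URI.parse(client.redirect_uri).host
      Digest::SHA256.hexdigest("#{sector}#{local_account_id}#{salt}")
    end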
The route /api/v0/user/ will now be used as a
non-OIDC route. In other words, /api/v0/user/
will require the "read" scope, while
/api/openid_connect/user_info/ will require the
"openid" scope.
Client must now be registered prior to initiating a
call to the token endpoint with the password flow.
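For reference, the scope split described above boils down to something like this (controller and filter names are hypothetical):

    # Only the required token scope differs between the two endpoints.
    class UsersApiController < ApplicationController       # serves /api/v0/user/
      before_action -> { require_token_scope!("read") }
    end

    class UserInfoController < ApplicationController       # serves /api/openid_connect/user_info/
      before_action -> { require_token_scope!("openid") }
    end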
Squashed commits:
[fdcef62] Rename authorization endpoint to protected resource endpoint
* add a class for checking pod connectivity
* extend pod model to handle new functionality
* add an admin frontend to list pods and re-trigger checks manually
* add a daily worker to run through all the pods (sketched below)
* add unit tests for most of the new code
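A rough sketch of the daily worker (class and method names are illustrative, not the exact ones added here):

    # Runs once a day, re-checks every known pod and stores the result on the
    # pod record so the admin frontend can display it.
    module Workers
      class CheckPods
        include Sidekiq::Worker
        sidekiq_options queue: :low

        def perform
          Pod.find_each(&:test_connection!)  # hypothetical method on the extended Pod model
        end
      end
    end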
refactoring:
- remove unused return values (they were used for caching, which was removed again)
- remove transaction (doesn't help here, added in 2615126)
closes #6068
Since the Facebook API has changed and additional permissions are required for all users on a pod to cross-post, an additional 'authorized' flag needs to be set for the Facebook service.
This flag allows either all users, one user or no users to use the cross-posting service.
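A simplified illustration of how such a flag can be interpreted (helper name and value handling are assumptions, not the exact configuration format):

    # true allows everyone, false/nil disables cross-posting entirely,
    # and a username restricts it to that single user.
    def facebook_cross_posting_allowed?(user, authorized_setting)
      case authorized_setting
      when true       then true
      when false, nil then false
      else authorized_setting == user.username
      end
    end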
Clarifies the situation for #5923, #5260 and #5085.
closes #5985
Conversation recipient validated
- Conversation foreign recipient not required to be validated.
- Added 'peter' as spec_helper user.
- New fixture, new statistic values.
- Clearer identifier names
opengraph_parser is basically unmaintained, issues are ignored
or deliberately closed without fixing. It pollutes the global
namespace and has no verification of correctness.
The opengraph gem has basically the same issues: not really maintained,
unreleased patches sitting on master for over a year, and it's not really smart either.
So I created my own version, and while at it, why not try to be
complete and robust, although it's still a work in progress.
This also improves general URL detection by parsing them
from the message after stripping markdown.
An additional dependency was added to support
fetching sites that require cookies to work at all.
For the same reason Faraday's default redirect limit was
bumped.
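Roughly, the improved URL detection works like this (a simplified sketch; the real code strips markdown via the message processing pipeline rather than a regex):

    require "uri"

    # Drop markdown link/image syntax so the raw targets remain, then scan the
    # remaining plain text for http(s) URLs.
    def urls_in_message(text)
      plain = text.gsub(/!?\[([^\]]*)\]\(([^)]*)\)/, '\1 \2')
      URI.extract(plain, %w[http https]).uniq
    end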
* Wrap it into a transaction
* Use destroy over delete so dependent destroys get triggered
and we thus don't fail on the foreign key constraints
* Check if a photo's status message actually exists before accessing
it
* Add missing dependent destroys
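Put together, the pattern looks roughly like this (a sketch, not the actual deletion code; touching the status message merely stands in for "accessing it"):

    # Wrap the whole cleanup in one transaction and use destroy, not delete,
    # so dependent: :destroy associations are removed and foreign key
    # constraints are not violated.
    ActiveRecord::Base.transaction do
      photos.each do |photo|
        photo.status_message.touch if photo.status_message  # guard: it may not exist
        photo.destroy
      end
    end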
I couldn't reproduce what the comment states anymore, so I just removed
it. This fixes a minor issue where html wouldn't be escaped in the
export.
Thanks to A Kai (@sixhundredns) for reporting.
This new class replaces all existing server side message
rendering helpers and is the new global entry point for such
needs. All models with relevant fields now expose an instance
of MessageRenderer for those. MessageRenderer acts as a
gateway between the existing processing solutions for markdown,
mentions and tags and provides a very flexible interface for
all output needs. This makes the API to obtain a message
in a certain format clear. As a result of centralizing the
processing a lot of duplication is eliminated. Centralizing
the message processing also makes it clear where to change
its behaviour, where to add new representations, and which
options are already available.
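A hypothetical usage sketch of that entry point (method names are illustrative, not the complete interface):

    message = post.message                # models expose a MessageRenderer instance
    message.plain_text_without_markdown   # e.g. for notification mails
    message.markdownified                 # e.g. for rendered HTML output
    message.urls                          # e.g. for oEmbed / OpenGraph lookups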
for cross-posting to OAuth-enabled Wordpress.com or Jetpack-enabled Wordpress.org blogs.
Added model for Wordpress service
Added very very basic Wordpress cross-posting functionality.
Added markdown support to post body
Fixed Wordpress::MAX_CHARACTERS problem
cleanup
Added default settings for Wordpress OAuth
Added Wordpress to configured services spec.
changelog changes
markdown link to their profile (fixes #2516)
add failing spec for #4160 / #2516
extend the spec a bit more
refactor mention handling in a status message
add method for filtering mentions by aspects (sketched below)
wire mention filtering into the status message model, adapt a few tests to
work properly
cosmetic changes
shorten helper methods
add changelog entry
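The filtering itself amounts to an intersection along these lines (a simplified sketch; the real method works on the status message and its author's contacts):

    # Keep only the mentioned people who are in the aspects the limited post is
    # shared with; everyone else just gets a plain profile link.
    def mentioned_people_in_aspects(mentioned_people, people_by_aspect_id, aspect_ids)
      allowed = aspect_ids.flat_map { |id| people_by_aspect_id.fetch(id, []) }
      mentioned_people & allowed
    end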
* Dropped all references to Resque
* Moved all jobs under app/workers since that's the Sidekiq convention
* Renamed Jobs module to Worker to match new location
* Adapted all jobs to Sidekiq
* Replaced all enqueue calls with perform_async (see the sketch after this list)
* Dropped Resque hacks from specs and features, replaced with
sidekiq/testing in RSpec and sidekiq/testing/inline in Cucumber
* Updated scripts to start a Sidekiq server
* Inline Sidekiq sinatra app
* Let Sidekiq create the actual Redis instance
* Workaround already initialized constant warnings in service models
* Resolved ToDo in one job definition by creating proper exception classes
for some errors in receiving posts
* Added sidekiq section to configuration to make it completely
configurable to the user
* Add Sidekiq middleware for clean backtraces
* Delay HttpMulti retry to give offline pods a chance to come back up
* Do not retry on GUID already taken and alike errors
* Be graceful about deleted posts in GatherOEmbedData
* Rename and reorganize post fetcher to fix autoloading, also let it use
Faraday's default connection so we get nice redirects
* Add initializer to load libs at a central place
* Added lib dir to autoload_once paths to increase thread safety
* Moved lib/exceptions.rb to lib/diaspora/ to conform to the namespacing
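A minimal example of the new convention (worker name and options are illustrative):

    # app/workers/receive_salmon.rb -- lives under app/workers and includes
    # Sidekiq::Worker instead of being a Resque job class.
    module Workers
      class ReceiveSalmon
        include Sidekiq::Worker
        sidekiq_options queue: :receive

        def perform(user_id, xml)
          # ... receive and process the payload ...
        end
      end
    end

Call sites then enqueue with Workers::ReceiveSalmon.perform_async(user.id, xml) instead of Resque.enqueue.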
Revert "Merge pull request #3968 from marpo60/limit_shareable_from_person_queries"
This reverts commit ddfc558a9b.
This reverts commit 30ed4b4e70, reversing
changes made to f50ce2cb1d.
limiting the fetch of the IDs breaks pagination, and there's no quick way to fix that
I left the spec in for future use.
Not ordering the IDs caused incorrect ones to be returned
The spec is totally at the wrong level but I couldn't make
something up that exposed the bug at a deeper level :(
* Throw away old system
* Add new system
* Add new example files
* Replace all calls
* add the most important docs
* Add Specs
* rename disable_ssl_requirement to require_ssl
* cloudfiles isn't used/called in our code
* since community_spotlight.list is only used as an enable flag, replace it with one and remove all legacy and irrelevant codepaths around it
* die if session secret is unset and on heroku
* First basic infrastructure for version information
This is a fix for public messages, where a malicious pod could spoof a message from someone a user was connected to, because the verified signatures were not checked to ensure the object actually came from said sender. This hole only affected public messages; the private code path had the correct checks.
THX to s-f-s (Stephan Schulz) for reporting and tracking down this issue, and props to Raven24 (florian.staudacher@gmx.at) for helping me test the patch
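The missing check amounts to something along these lines (a simplified sketch with hypothetical names, not the actual patch):

    class NotSignedByAuthor < StandardError; end

    # A relayed public object must be authored by the person whose signature
    # verified the envelope, otherwise a malicious pod could spoof it.
    def verify_object_author!(envelope_author, object)
      unless object.diaspora_handle == envelope_author.diaspora_handle
        raise NotSignedByAuthor, "public object does not match the verified sender"
      end
    end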
Custom Redcarpet renderer to escape hashtags (but not legitimate headers)
in emails before Markdown processing. Prevents hashtags from being rendered
as H1 headers. This also leaves open the possibility of parsing hashtags
into clickable links in the future.
fixes #3325
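The idea, roughly (class name and regex are illustrative): escape a "#" that starts a hashtag so Markdown leaves it alone, while a real header keeps its space after the "#" and is still rendered.

    require "redcarpet"

    class HashtagEscapingRenderer < Redcarpet::Render::HTML
      # Runs before Markdown processing: "#tag" becomes "\#tag" (a literal "#"),
      # while "# Heading" is untouched because of the space after the "#".
      def preprocess(text)
        text.gsub(/(^|\s)#([[:alnum:]_]+)/, '\1\#\2')
      end
    end

    markdown = Redcarpet::Markdown.new(HashtagEscapingRenderer)
    markdown.render("#tag should stay a tag\n\n# but this is a heading")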
* do not raise if profile xrd isn't found
* error out on an SSL error rather than on the unexpected nil value later
* be more verbose about failed xrd fetches
which accepts several server misconfigurations
OpenSSL is very liberal about the order and content of the supplied
cert chain. GnuTLS, however, is very strict about it. So to support
GnuTLS we need to tell our community to fix their servers (joindiaspora.com
is broken too). You can check it with
gnutls-cli -V --x509cafile=/etc/ssl/ca-certificates.crt $domain
It will print the certs in the order received and say at the end
if it could be verified. Note that not only the order is important but
also the content. Many example configurations, especially for Nginx,
include the root cert of the CA in the chain which is wrong.
Note from a GnuTLS maintainer: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=%23573736#29
Revert "Fix federation with GnuTLS by passing the ca_file to Typhoeus"
This reverts commit 640a0181ab.
How did this ever work? Anyway, Diaspora now works with libcurl4-gnutls-dev,
which already supports SNI in most distributions and is also the default
for many distros. Everybody should switch to it. Do so by installing it,
then running gem uninstall typhoeus followed by bundle so Typhoeus is
recompiled against GnuTLS.