Some thoughts

Added by privacybrowser user over 4 years ago

Interesting and timely project!

It would be great to be able to configure the TLS stack: disable 0-RTT (assuming it is enabled), session tickets, session IDs, and specific ciphers/TLS versions (assuming session tickets/IDs, which are much more deeply involved in core parts of TLS 1.3, cannot be disabled). Some info on HSTS would also be helpful: is HSTS info stored for all sites browsed, like SiteSecurityServiceState.txt in Firefox? If so, it can be used as a fingerprinting vector.

It would also be great to have more granularity on third-party requests to allow blocking of subdomains, e.g. so requests to subdomain.example.com can be considered third-party from example.com or www.example.com.

Being able to whitelist/blacklist specific requests and types of requests (e.g. CSS, images, scripts, inline scripts, WSS, XHR, CSP, frames, etc.; I'm sure there are more) would also be useful, as would customizing or disabling certain headers: e.g. the Accept and Accept-Encoding headers (since these two vary by browser, so if a user forges the user agent it would probably warrant changing these headers to match), disabling X-Requested-With, and especially being able to disable the Referer header...

Equally, being able to intercept API calls made by specific scripts on a given page, and to exhaustively disable specific JS APIs, would be awesome (though I know that is a massive job and potentially resource-intensive): canvas fingerprinting protection, DOM storage, IndexedDB, WebSockets, WebRTC, etc.

I know some of these are probably impossible until Privacy WebView is finished, but it would be great if some of these options were available (especially the uBlock Origin style request white/blacklisting and the Referer control, and perhaps the other header options).

Hope I don't sound too much like the typical unappreciative and demanding FOSS user...

Cool project; it is a much nicer experience than Fennec/Fenix and easier to manage than Chromium!

Thanks for making this!


Replies (3)

RE: Some thoughts - Added by Soren Stoutner over 4 years ago

privacybrowser user wrote:

It would be great to be able to configure the TLS stack: disable 0-RTT (assuming it is enabled), session tickets, session IDs, and specific ciphers/TLS versions (assuming session tickets/IDs, which are much more deeply involved in core parts of TLS 1.3, cannot be disabled). Some info on HSTS would also be helpful: is HSTS info stored for all sites browsed, like SiteSecurityServiceState.txt in Firefox? If so, it can be used as a fingerprinting vector.

As a general rule, I think it is important for Privacy Browser to remain a browser and not try to re-implement features that are part of the operating system. So, for example, I don't intend to ship a modified SSL library with Privacy Browser. Some of these settings, like dealing with 0-RTT, session tickets, and TLS session IDs, probably fit into that category.

Session IDs, as the phrase is more commonly used in web development, are just cookies with no expiration date (meaning they usually get deleted as soon as the session closes). As such, they only come into play when cookies are enabled.
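
For reference, this distinction is visible in the Android API itself: CookieManager can delete session cookies (those without an expiration date) separately from persistent ones. A minimal sketch, assuming a standard Android WebView environment:

```kotlin
import android.webkit.CookieManager

// Flush only session cookies; persistent cookies are left in place.
fun clearSessionCookies() {
    CookieManager.getInstance().removeSessionCookies { removed ->
        // `removed` is true if any session cookies were actually deleted.
    }
}
```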

Being able to specify TLS version and cipher suites is a planned feature, but it will require Privacy WebView in the 4.x series. https://redmine.stoutner.com/issues/210
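
As an illustration of the kind of control being requested, the platform's own javax.net.ssl API already allows restricting protocol versions and cipher suites, but only for sockets an app opens itself. This sketch does not affect WebView, whose network stack is Chromium's (which is exactly why Privacy WebView is needed); the filter strings are illustrative assumptions:

```kotlin
import javax.net.ssl.SSLContext
import javax.net.ssl.SSLSocket

// Open a TLS socket restricted to TLS 1.2/1.3 and a narrow cipher allow list.
fun openRestrictedSocket(host: String, port: Int): SSLSocket {
    val socket = SSLContext.getDefault().socketFactory
        .createSocket(host, port) as SSLSocket
    socket.enabledProtocols = arrayOf("TLSv1.2", "TLSv1.3")
    // Keep only AEAD suites; adjust the filter to taste.
    socket.enabledCipherSuites = socket.supportedCipherSuites
        .filter { "AES_256_GCM" in it || "CHACHA20" in it }
        .toTypedArray()
    return socket
}
```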

Getting rid of HSTS is a good idea, as it has no benefit for Privacy Browser but does have negative privacy implications. I will disable it for the next release. https://redmine.stoutner.com/issues/480

It would also be great to have more granularity on third-party requests to allow blocking of subdomains, e.g. so requests to subdomain.example.com can be considered third-party from example.com or www.example.com.

This is a good idea, but the immediate benefit isn't readily apparent to me. Browsing with all third-party requests disabled breaks so many websites that it is quite difficult to use it as a default setting. Over time, I would like to grow Privacy Browser to the point where it has at least 20% market share worldwide. At that point, I intend to make blocking of third-party requests the default behavior. If I have that much market share, web developers will change their methods to make websites work without third-party requests. Also at that point, it would make sense for Privacy Browser to have a more restrictive third-party blocking option.

https://redmine.stoutner.com/issues/481
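
For what it's worth, the strict subdomain rule the request describes is simple to state in code. A minimal sketch (hostname comparison only; a real implementation would also have to consider IP literals and the public suffix list):

```kotlin
// Treat any host that is not exactly the page's host as third party,
// so subdomain.example.com is third party relative to example.com.
fun isThirdParty(pageHost: String, requestHost: String): Boolean =
    !pageHost.equals(requestHost, ignoreCase = true)

// Under this strict rule, both of these are third party:
// isThirdParty("example.com", "subdomain.example.com")  // true
// isThirdParty("example.com", "www.example.com")        // true
```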

Being able to whitelist/blacklist specific requests and types of requests (e.g. CSS, images, scripts, inline scripts, WSS, XHR, CSP, frames, etc.; I'm sure there are more) would also be useful, as would customizing or disabling certain headers: e.g. the Accept and Accept-Encoding headers (since these two vary by browser, so if a user forges the user agent it would probably warrant changing these headers to match), disabling X-Requested-With, and especially being able to disable the Referer header...

All of these ideas require Privacy WebView in the 4.x series. A number of them already have feature requests, as can be seen at https://redmine.stoutner.com/projects/privacy-browser/issues?utf8=%E2%9C%93&set_filter=1&sort=id%3Adesc&f%5B%5D=status_id&op%5Bstatus_id%5D=o&f%5B%5D=priority_id&op%5Bpriority_id%5D=%3D&v%5Bpriority_id%5D%5B%5D=10&f%5B%5D=&c%5B%5D=tracker&c%5B%5D=status&c%5B%5D=priority&c%5B%5D=subject&c%5B%5D=assigned_to&c%5B%5D=updated_on&group_by=&t%5B%5D=. Feel free to add any additional feature requests there (one per item, please).
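
To show why these belong in the 4.x series rather than today's WebView, here is a hedged sketch of the closest approximation the current API allows: intercepting a request and replaying it manually with the unwanted headers stripped. This is illustrative only; it mishandles cookies, POST bodies, caching, and redirects, which is much of the reason deeper hooks are needed:

```kotlin
import android.webkit.WebResourceRequest
import android.webkit.WebResourceResponse
import android.webkit.WebView
import android.webkit.WebViewClient
import java.net.HttpURLConnection
import java.net.URL

class HeaderStrippingClient : WebViewClient() {
    // Called on a background thread, so the network access here is permitted.
    override fun shouldInterceptRequest(
        view: WebView, request: WebResourceRequest
    ): WebResourceResponse? {
        if (request.method != "GET") return null  // let WebView handle everything else
        val connection = URL(request.url.toString()).openConnection() as HttpURLConnection
        // Copy the original headers, minus the ones being suppressed.
        request.requestHeaders
            .filterKeys { it !in setOf("Referer", "X-Requested-With") }
            .forEach { (key, value) -> connection.setRequestProperty(key, value) }
        val mimeType = connection.contentType?.substringBefore(';') ?: "text/plain"
        return WebResourceResponse(mimeType, "utf-8", connection.inputStream)
    }
}
```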

Equally, being able to intercept API calls made by specific scripts on a given page, and to exhaustively disable specific JS APIs, would be awesome (though I know that is a massive job and potentially resource-intensive): canvas fingerprinting protection, DOM storage, IndexedDB, WebSockets, WebRTC, etc.

Again, these types of features all require Privacy WebView. Note that fine-grained JavaScript controls are already a planned feature. It will be a time-consuming and difficult task, but given that JavaScript is the single most dangerous destroyer of online privacy, larger than all other dangers combined, it will be worth the effort. https://redmine.stoutner.com/issues/270
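
For a sense of what an interim, script-injection version of this looks like, a hedged sketch: overriding a fingerprinting-relevant API from, e.g., WebViewClient.onPageStarted() so the replacement lands before most page scripts run. This is best effort only (pages can sometimes recover the originals through iframes), which is part of why real hooks in Privacy WebView are the plan:

```kotlin
import android.webkit.WebView

// Neuter canvas readback by overriding the API in the page's JS context.
fun disableCanvasReadback(webView: WebView) {
    val script = """
        HTMLCanvasElement.prototype.toDataURL = function () { return ""; };
        CanvasRenderingContext2D.prototype.getImageData = function () { return null; };
    """.trimIndent()
    webView.evaluateJavascript(script, null)
}
```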

I appreciate the time you took to write this post. All of these are good ideas. Feel free to add any bug reports or feature requests as you see fit, but make sure that each one covers only one item. That makes it easier to track and organize progress.

RE: Some thoughts - Added by privacybrowser user over 4 years ago

Thanks for the quick response!

I understand the reluctance to re-implement core OS features; however, 0-RTT has been criticized for being subject to replay attacks, and TLS session tickets and session IDs are a known tracking vector (see the Tor Project's modifications to Firefox in the past). TLS fingerprinting, I think, is a massively underappreciated tracking vector, and it goes far beyond even what I have mentioned.

The reason I brought up the idea of more granular filtering of third-party requests is that it could allow you to keep requests limited to a specific AS (autonomous system). I personally find it pretty easy to unbreak sites when browsing with third-party blocking, third-party frame blocking, and RequestPolicy in Firefox, and a potential solution is to show an icon where images or assets are missing from a page, which is what RequestPolicy used to do. Suppose I browse example.com, which I know resolves to a given IP on a given AS, but a request to assets.example.com then goes to another AS that I'd rather not talk to, e.g. an asset CDN hosted on a cloud service while the main site isn't. I would like to be able to block requests to assets.example.com but selectively allow other requests as I see fit. For example, if I use Gmail in an email client and my browser then makes a request to a GCP-hosted subdomain, there is evidence to suggest Google has the capability to use that data, with novel, passive, non-JS fingerprinting combined with an IP address, to correlate the GCP request with the Gmail account requests.

Equally, being able to pre-block request types would be useful, IMHO. In Firefox with uBlock Origin I can block all inline scripts, scripts, XHR, WSS, and other more esoteric request types. It is equally possible to filter by request type for a specific subdomain or domain, or via any regex set up using static filtering.

Being able to selectively whitelist scripts on a page, then observe the requests they make and allow through only the absolutely functionality-critical XHR/WSS/image/other JS-initiated requests, is something I have found quite useful for preventing pointless phone-home requests. Of course, some of the really smart players out there (e.g. Google) make this impossible by bundling their JavaScript into one big minified, highly obfuscated, and often encrypted blob, which makes selective script blocking less useful, and by combining tracking requests with functionality-critical requests, either as a query parameter, as part of an image URL, or in single multipurpose XHR requests. However, plenty of other sites still do their tracking simply using XHRs with obvious names, with the tracking in the headers or something similar, and often the functionality of the site isn't totally broken if you execute the scripts (albeit a massive security risk, given what Rowhammer and the plethora of JS-based exploits have shown) but block JS-initiated requests entirely.

I will try to make some dedicated feature requests for some of these.

Nice to see someone in the mobile browser space actually working on this stuff, especially given Mozilla's current direction. The move to WebExtensions has introduced yet more attack surface, despite their claims to the contrary, and endless regressions.

Ultimately, Chromium's security model is the best out there, with great sandboxing results coming out of Google's security team. If securely compiled (which alone can be a headache), you have a relatively secure browser. But then you have to deal with the fact that Chromium is very complex and constantly changing, and that Google constantly adds "features" related to the bloatware Google ships in Chrome, meaning that actually maintaining a stable, secure, privacy-conscious build of Chromium is an absolute nightmare.

This does make me wonder a bit about how your fork of WebView will keep up with upstream development.

This is an inspiring project, and I've written too much already, so I'll stop now :)

RE: Some thoughts - Added by Soren Stoutner over 4 years ago

privacybrowser user wrote:

I understand the reluctance to re-implement core OS features; however, 0-RTT has been criticized for being subject to replay attacks, and TLS session tickets and session IDs are a known tracking vector (see the Tor Project's modifications to Firefox in the past). TLS fingerprinting, I think, is a massively underappreciated tracking vector, and it goes far beyond even what I have mentioned.

I don't disagree with you at all about the negative privacy implications of these things. But they should be addressed in the TCP/IP stack in the OS, not in the browser. Attempting to reimplement the TCP/IP stack in the browser significantly increases the attack surface of the browser. And, because I do not have the required cryptographic background to reimplement TLS or other protocols involved, I would invariably introduce more bugs than I would solve. You can see https://www.stoutner.com/minimizing-privacy-browsers-attack-surface/ for a further discussion on my philosophy of minimizing Privacy Browser’s attack surface.

The reason I brought up the idea of more granular filtering of third-party requests is that it could allow you to keep requests limited to a specific AS (autonomous system). I personally find it pretty easy to unbreak sites when browsing with third-party blocking, third-party frame blocking, and RequestPolicy in Firefox, and a potential solution is to show an icon where images or assets are missing from a page, which is what RequestPolicy used to do. Suppose I browse example.com, which I know resolves to a given IP on a given AS, but a request to assets.example.com then goes to another AS that I'd rather not talk to, e.g. an asset CDN hosted on a cloud service while the main site isn't. I would like to be able to block requests to assets.example.com but selectively allow other requests as I see fit. For example, if I use Gmail in an email client and my browser then makes a request to a GCP-hosted subdomain, there is evidence to suggest Google has the capability to use that data, with novel, passive, non-JS fingerprinting combined with an IP address, to correlate the GCP request with the Gmail account requests.

These types of dynamic blocklist interactions cannot be accomplished with WebView. They will have to wait for Privacy WebView.
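
Purely as a thought experiment, if Privacy WebView eventually exposed a per-request hook, the AS-limiting idea could be approximated by resolving each host and checking it against an allowed IPv4 prefix. Everything here (the function, and the prefix check as a rough stand-in for "same AS") is a hypothetical illustration, not an existing API:

```kotlin
import java.net.Inet4Address
import java.net.InetAddress

// True if `host` resolves to an IPv4 address inside prefix/bits.
fun isHostInAllowedPrefix(host: String, prefix: String, bits: Int): Boolean {
    fun toInt(address: InetAddress): Int =
        address.address.fold(0) { acc, byte -> (acc shl 8) or (byte.toInt() and 0xFF) }
    val hostAddress = InetAddress.getAllByName(host)
        .filterIsInstance<Inet4Address>().firstOrNull() ?: return false
    val mask = if (bits == 0) 0 else -1 shl (32 - bits)
    return (toInt(hostAddress) and mask) == (toInt(InetAddress.getByName(prefix)) and mask)
}

// e.g. allow assets.example.com only if it resolves inside the main site's /16:
// isHostInAllowedPrefix("assets.example.com", "93.184.216.0", 16)
```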

Equally, being able to pre-block request types would be useful, IMHO. In Firefox with uBlock Origin I can block all inline scripts, scripts, XHR, WSS, and other more esoteric request types. It is equally possible to filter by request type for a specific subdomain or domain, or via any regex set up using static filtering.

This type of blocking will also have to wait for Privacy WebView. When filtering a request, only the URL is presented; it isn't possible to know which HTML tag requested the URL.
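
Concretely, this is everything shouldInterceptRequest hands the app for a sub-resource fetch; a short sketch, assuming the standard android.webkit API:

```kotlin
import android.webkit.WebResourceRequest

// Everything available when filtering: no resource type, no initiating tag.
fun describe(request: WebResourceRequest): String = buildString {
    append("url=${request.url} ")              // the only identifying data
    append("method=${request.method} ")        // GET, POST, ...
    append("mainFrame=${request.isForMainFrame} ")
    append("headers=${request.requestHeaders}")
}
```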

Being able to selectively whitelist scripts on a page, then observe the requests they make and allow through only the absolutely functionality-critical XHR/WSS/image/other JS-initiated requests, is something I have found quite useful for preventing pointless phone-home requests. Of course, some of the really smart players out there (e.g. Google) make this impossible by bundling their JavaScript into one big minified, highly obfuscated, and often encrypted blob, which makes selective script blocking less useful, and by combining tracking requests with functionality-critical requests, either as a query parameter, as part of an image URL, or in single multipurpose XHR requests. However, plenty of other sites still do their tracking simply using XHRs with obvious names, with the tracking in the headers or something similar, and often the functionality of the site isn't totally broken if you execute the scripts (albeit a massive security risk, given what Rowhammer and the plethora of JS-based exploits have shown) but block JS-initiated requests entirely.

This is why your best bet is to simply browse with JavaScript disabled. :)
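
The corresponding control already exists in today's WebView as a single setting (JavaScript is off by default in a bare WebView):

```kotlin
import android.webkit.WebView

// Turn JavaScript off for this WebView instance.
fun disableJavaScript(webView: WebView) {
    webView.settings.javaScriptEnabled = false
}
```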

Ultimately, Chromium's security model is the best out there, with great sandboxing results coming out of Google's security team. If securely compiled (which alone can be a headache), you have a relatively secure browser. But then you have to deal with the fact that Chromium is very complex and constantly changing, and that Google constantly adds "features" related to the bloatware Google ships in Chrome, meaning that actually maintaining a stable, secure, privacy-conscious build of Chromium is an absolute nightmare.

This does make me wonder a bit about how your fork of WebView will keep up with upstream development.

Privacy WebView will consist of a series of patch files that make minimally invasive changes to WebView, mostly making some methods NOOP (https://www.urbandictionary.com/define.php?term=noop) and adding hooks to other methods, so that internal functionality is exposed to Privacy Browser.

These patches will have to be rebased for each release of WebView. As such, I should stress that these will have to be MINIMALLY INVASIVE patches. Anything that would involve a large rewrite, even if it is a desired feature, will not be implemented because the maintenance burden would be too significant.
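
To make the "minimally invasive" idea concrete, here is a hypothetical before/after of the no-op style described above. The class and method names are invented for illustration; they are not actual WebView internals:

```kotlin
// Upstream behavior (before the patch): the method records state internally.
class UpstreamLinkTracker(private val visitedLinks: MutableSet<String>) {
    fun reportVisitedLink(url: String) {
        visitedLinks.add(url)
    }
}

// Patched behavior (after the patch): same signature, so callers are
// untouched, but the body is a no-op and nothing is recorded.
class PatchedLinkTracker {
    fun reportVisitedLink(url: String) {
        // Intentionally empty.
    }
}
```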
