Hi,
I've been running Squid on CentOS 7 (and before that on 6 and 5), and it has always run nicely. I've been using it mostly as a transparent filtering proxy in school networks.
So far, I've only been able to filter HTTP.
Do any of you do transparent HTTPS filtering? Any suggestions, advice, caveats, dos and don'ts?
Cheers from the snowy South of France,
Niki
On 2018-02-28 06:23 PM, Nicolas Kovacs wrote:
So far, I've only been able to filter HTTP.
Do any of you do transparent HTTPS filtering? Any suggestions, advice, caveats, dos and don'ts?
I recommend everyone in France to spend their money on a school with free internet.
Please tell us the name of your schools.
HTTPS exists because we want freedom and privacy on the internet.
On 28/02/2018 at 22:32, Itamar Reis Peixoto wrote:
I recommend everyone in France to spend their money on a school with free internet.
I'm not sure I understand. Our students sure don't pay for accessing the Internet.
Please tell us the name of your schools.
HTTPS exists because we want freedom and privacy on the internet.
Indeed. Except we have to stick to the law (article 227-24 from the French penal code) and provide filtered internet access so underage kids don't watch porn, build bombs or join the Jihad. Like pretty much every school, public library or administration in Western Europe.
Cheers,
Niki
Hello Nicolas,
On Wed, 28 Feb 2018 23:38:19 +0100 Nicolas Kovacs info@microlinux.fr wrote:
You should not bother replying to a troll, you're just feeding it ;-).
Regards,
On Wed, Feb 28, 2018 at 10:23:31PM +0100, Nicolas Kovacs wrote:
So far, I've only been able to filter HTTP.
Do any of you do transparent HTTPS filtering? Any suggestions, advice, caveats, dos and don'ts?
I did some experiments ~2 weeks ago. It worked, but I still need to work on the certificates. Squid will re-issue certificates for the connections it intercepts, and if the browser doesn't recognize the CA, it will scream out loud. For the test, I imported my test CA into the browser, and then it was completely transparent. Not sure if there is a way to avoid this. I hope not, actually.
Marcel
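For reference, the signing CA Squid needs for this kind of interception can be generated with a couple of OpenSSL commands. A minimal sketch; the file names and the subject CN are arbitrary examples:

```shell
# Create a CA private key and a self-signed CA certificate.
# -nodes: no passphrase; -extensions v3_ca: mark it as a CA.
openssl req -new -newkey rsa:2048 -sha256 -days 3650 -nodes -x509 \
    -extensions v3_ca -subj "/CN=Test Squid CA" \
    -keyout squidCA-key.pem -out squidCA-cert.pem

# Sanity check: the basicConstraints extension must say CA:TRUE,
# otherwise browsers will reject certificates signed by it.
openssl x509 -in squidCA-cert.pem -noout -text | grep "CA:TRUE"
```

Squid's https_port directive would then point at these files; the exact options depend on the Squid version.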
On 28/02/2018 at 22:43, Marcelo Ricardo Leitner wrote:
Squid will re-issue certificates for those connections that it intercepts, and if the browser doesn't recognize the CA, it's going to scream out loud.
If you have any documentation, I'd be grateful for that.
On a more general note, it's not that I won't RTFM. It just seems that there's too much information out there on the subject, and everyone seems to be hacking together their own thing. So I'm looking for something that just works, even if it means some extensive reading.
Cheers,
Niki
On Wed, Feb 28, 2018 at 06:43:50PM -0300, Marcelo Ricardo Leitner wrote:
https://smoothnet.org/squid-proxy-with-ssl-bump/ was of good help to me, btw.
Marcelo
On 28/02/2018 at 22:23, Nicolas Kovacs wrote:
So far, I've only been able to filter HTTP.
Do any of you do transparent HTTPS filtering? Any suggestions, advice, caveats, dos and don'ts?
After a week of trial and error, transparent HTTPS filtering works perfectly. I wrote a detailed blog article about it.
https://blog.microlinux.fr/squid-https-centos/
Cheers,
Niki
Nice, thanks for sharing.
You could probably just drop your CA cert into the filesystem and run a couple of commands to get it imported, rather than having to import the CA into the browsers individually. You could deliver it via yum/rpm or, better yet, ansible or even a shell script.
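On CentOS clients, the system-wide import Nux describes is indeed just a copy plus a rebuild of the trust bundles. A sketch; the CA generated here is only a stand-in for the real proxy CA, and the paths are the stock CentOS 7 ones:

```shell
# Stand-in CA for demonstration; in practice use the proxy's real CA cert.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=Example Proxy CA" -keyout ca-key.pem -out ca-cert.pem

# Export the public certificate in DER form, which the import dialogs on
# Windows, macOS and iOS all accept. Never distribute the private key.
openssl x509 -in ca-cert.pem -outform DER -out proxy-ca.der

# System-wide import on a CentOS 7 client (as root):
#   cp ca-cert.pem /etc/pki/ca-trust/source/anchors/proxy-ca.pem
#   update-ca-trust extract
# Note: Firefox keeps its own certificate store; it needs a separate
# import, or the security.enterprise_roots.enabled preference set.
```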
-- Sent from the Delta quadrant using Borg technology!
Nux! www.nux.ro
----- Original Message -----
From: "Nicolas Kovacs" info@microlinux.fr
To: "CentOS mailing list" centos@centos.org
Sent: Monday, 5 March, 2018 12:04:59
Subject: Re: [CentOS] Squid and HTTPS interception on CentOS 7 ?
--
Microlinux - Solutions informatiques durables
7, place de l'église - 30730 Montpezat
Site : https://www.microlinux.fr
Blog : https://blog.microlinux.fr
Mail : info@microlinux.fr
Tél. : 04 66 63 10 32
_______________________________________________
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
On 05/03/2018 at 13:30, Nux! wrote:
You could probably just drop your CA cert in the filesystem and run a couple of commands to get it imported, rather than having to import the CA in the browsers individually. You could probably deliver it via yum/rpm or better yet, ansible or even some shell script.
I will have to use this in environments with mainly Windows, OS X and iOS clients. I'm still thinking about how to do this, but I guess I'll just set up a local web page on the server, with a link to download the certificate file and short instructions on how to install it in the most common browsers (Internet Explorer, Edge, Firefox, Chrome, Safari, ...).
Niki
On 03/05/18 06:34, Nicolas Kovacs wrote:
I will have to use this in environments with mainly Windows, OS X and iOS clients. I'm still thinking about how to do this, but I guess I'll just setup a local web page on the server, with a link to download the certificate file and short instructions on how to install it on the most common browsers (Internet Explorer, Edge, Firefox, Chrome, Safari, ...).
Sorry, I missed the beginning of this thread. This sounds to me like running one's own Certification Authority, which I did for over a decade. These days, however, one may consider Let's Encrypt:
- you will have to run a web server to have a certificate signed by them, but pointing other services at that same certificate/secret key pair will work.
Just my $0.02
Valeri
Once upon a time, Valeri Galtsev galtsev@kicp.uchicago.edu said:
- you will have to run web server to have certificate signed by them
Not necessarily - we do most of our Let's Encrypt validation with DNS rather than HTTP.
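With certbot, the DNS flavour of validation is one flag away. A hypothetical sketch: the hostname is an example, and the command itself needs certbot installed plus the ability to publish TXT records in the zone:

```shell
DOMAIN="proxy.example.org"   # hypothetical hostname to certify
# DNS-01 validation: Let's Encrypt checks a TXT record instead of
# fetching a file over HTTP, so no web server is needed:
#   certbot certonly --manual --preferred-challenges dns -d "$DOMAIN"
# certbot pauses and asks you to publish the token it prints under
# this record name before issuance proceeds:
echo "_acme-challenge.$DOMAIN"
```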
The certificate must have *CA:true* set in order to act as a CA for the host certificates Squid generates dynamically.
Most probably, Let's Encrypt will ignore this constraint in the CSR.
2018-03-05 12:33 GMT-03:00 Chris Adams linux@cmadams.net:
On 05/03/2018 at 16:30, Valeri Galtsev wrote:
I do use Let's Encrypt for all my public certificates. But I can't use it on a local machine with a hostname like server.company.lan. This is simply not possible.
Niki
On 03/05/18 10:21, Nicolas Kovacs wrote:
I do use LetsEncrypt for all my public certificates. But I can't use it on a local machine with a hostname like server.company.lan. This is simply not possible.
Yes, it is not. They verify on a publicly accessible server that the host is one you have access to, and certainly no CA will sign a certificate for private address space. (I missed the beginning of the thread, which was edited away from what I was replying to...)
Valeri
On 05.03.2018 at 13:04, Nicolas Kovacs info@microlinux.fr wrote:
On 28/02/2018 at 22:23, Nicolas Kovacs wrote:
So far, I've only been able to filter HTTP.
Do any of you do transparent HTTPS filtering? Any suggestions, advice, caveats, dos and don'ts?
After a week of trial and error, transparent HTTPS filtering works perfectly. I wrote a detailed blog article about it.
I wonder whether this works with all HTTPS-enabled sites. Chrome has checks for Google certificates hardcoded. Certificate Transparency, HTTP Public Key Pinning and CAA DNS records also help the end node identify a MITM. I hope such setups will become impractical in the near future.
About your legal requirements: weighing interests is what courts do daily. So such requirements are not asking you to destroy the integrity and confidentiality of >95% of users' activity. Blocking routing, DNS, IPs and ports is the way to go.
-- LF
On Monday, March 5, 2018 7:23:53 AM CST Leon Fauster wrote:
Although not really related to CentOS, I do have some thoughts on this. I used to work in the IT department of a public library. One of the big considerations at a library is patron privacy. We went to great lengths to NOT record what web sites were visited by our patrons. We also deny requests from anyone to find out what books a patron has checked out.
The library is required by law to provide web filtering, mainly because we have public-use computers which are used by children. For HTTP this is easy. HTTPS is, as this discussion reveals, a different animal.
We started to set up a filter which would run directly on our router (Juniper SRX-series) using EWF software. It quickly became apparent that any kind of https filtering requires a MITM attack. We were basically decrypting the patron's web traffic on our router, then encrypting it again with a different cert.
When we realized what it would take, we had a HUGE internal discussion about how to proceed. Yeah, the lawyers were all over it! In the end we decided to not attempt to filter https traffic except by whatever was not encrypted. Basically that means web site names.
Our test case was the Playboy web site. They are available on https, but they do not automatically redirect http to https. If you open playboy [dot] com with no protocol specified, it goes over http. Our existing filter blocked that. However, if you open https[colon]// playboy [dot] com, it goes straight in. The traffic never goes over http, so the filter on the router never processes it.
Security by obscurity ... It was the best we could do without violating our own policies on patron privacy.
Starting with version 3.5, Squid introduced a new feature named "SslBump Peek and Splice".
With this functionality, Squid is able to intercept HTTPS traffic transparently (with exceptions, of course).
This way, with splice, Squid can log HTTPS traffic and apply directives like dstdomain to HTTPS traffic without needing a self-signed CA.
This is the same functionality available on appliances like SonicWall, Fortigate, Checkpoint, etc.
An example config:
http_port 80 intercept
https_port 443 intercept ssl-bump \
    cert=/etc/squid3/ssl/ca/intermediate/certs/wilcard.pem \
    key=/etc/squid3/ssl/ca/intermediate/private/wildcard.key \
    generate-host-certificates=off version=4 \
    options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE
cache_log /var/log/squid3/cache.log
access_log daemon:/var/log/squid3/access.log squid
netdb_filename stdio:/var/log/squid3/netdb.state
sslcrtd_program /usr/libexec/ssl_crtd -s /var/log/squid3/ssl_db -M 4MB
sslcrtd_children 1 startup=1 idle=1
cache_effective_user proxy
cache_effective_group proxy
pinger_enable off
dns_v4_first on
acl HTTPS dstdomain "/etc/squid3/https"
acl BLOCK url_regex "(torrent)|sex(y|o)"
cache deny all
ssl_bump bump HTTPS
ssl_bump splice all
http_access deny BLOCK
http_access allow all
PS: the "ssl-bump" option is there only to satisfy the Squid parser.
For details: https://wiki.squid-cache.org/Features/SslPeekAndSplice
Regards,
2018-03-05 11:34 GMT-03:00 Bill Gee bgee@campercaver.net:
On 03/05/18 08:34, Bill Gee wrote:
Although not really related to CentOS, I do have some thoughts on this. I used to work in the IT department of a public library. One of the big considerations at a library is patron privacy. We went to great lengths to NOT record what web sites were visited by our patrons. We also deny requests from anyone to find out what books a patron has checked out.
I bet your servers never embedded links to anything external. If there is an external link, it is requested to open in a new browser window. No part of the page should need external content (not living on our server). That's the way we have done it since forever.
It sounds like I will soon have to fight against "google-analytics" glued into each page of our websites. It is amazing that people who have no knowledge rule the technical aspects of IT in many places...
Valeri
Valeri Galtsev wrote:
I bet, your servers never embedded links to anything external. If it is external link, it is requested to open in new browser window. No part of the page should need external (not living on our server) content. That was the way we did it since forever.
It sounds like I will have to fight soon against "google-analytics" glued into each page of our websites. It is amazing that people who have no knowledge rule technical aspects of IT in many places...
Yes, why would students be allowed to contact such sites? One could argue about which is worse: being spied upon by trackers and having their privacy taken away, allowing the manipulation of unaware students by ruthless entities, or allowing the students to follow their natural desire to explore their sexuality, which may lead them to watching porn.
There isn't even a beginning of an understanding of what kind of damage might be done with the information gathered and by getting people used to having no privacy, and protection against it is severely lacking. Are the students capable of deciding whether they want to be the subjects of 100% surveillance, do they understand what it means, how well are they informed about how to protect themselves against it, and do they have the means to do it?
On 05.03.2018 at 15:34, Bill Gee bgee@campercaver.net wrote:
Our test case was the Playboy web site. They are available on https, but they do not automatically redirect http to https. If you open playboy [dot] com with no protocol specified, it goes over http. Our existing filter blocked that. However, if you open https[colon]// playboy [dot] com, it goes straight in. The traffic never goes over http, so the filter on the router never processes it.
All browsers send "server_name" [*] in their HTTPS requests; that is the domain part of the URI. So you can identify the requested HTTPS site without decrypting anything (the TLS handshake carries this information in the clear) and without damaging privacy.
[*] https://tools.ietf.org/html/rfc6066
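Squid can act on that unencrypted server name with peek-and-splice rules, without ever decrypting the payload. A squid.conf sketch (Squid 3.5+ syntax; the port, certificate path and blocked domain are placeholders):

```
https_port 3129 intercept ssl-bump cert=/etc/squid/proxyCA.pem

acl blocked_names ssl::server_name .blocked-example.com
ssl_bump peek all                 # read the SNI from the TLS ClientHello
ssl_bump terminate blocked_names  # close blacklisted connections
ssl_bump splice all               # tunnel everything else untouched
```

Even in splice-only mode the https_port directive needs a certificate configured, although it is never presented to clients that get spliced.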
-- LF
Wouldn't filtering the DNS be more practical?
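For the DNS approach, a resolver that lies about blocked zones is only a few lines of configuration. A sketch for dnsmasq (the domain is a placeholder, and clients must be forced to use this resolver for it to matter):

```
# /etc/dnsmasq.d/filter.conf
# Answer queries for the zone and all its subdomains with 0.0.0.0.
address=/blocked-example.com/0.0.0.0

# The unbound equivalent would be:
#   local-zone: "blocked-example.com." always_nxdomain
```

The obvious limitation: clients using their own resolvers (including DNS-over-HTTPS) or raw IPs bypass it entirely.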
On 5 March 2018 at 18:57, Leon Fauster leonfauster@googlemail.com wrote:
On 03/05/18 07:23, Leon Fauster wrote:
I wonder if this works with all https enabled sites? Chrome has capabilities hardcoded to check google certificates.
Google, huh ;-( see below...
Certificate Transparency, HTTP Public Key Pinning, CAA DNS are also supporting the end node to identify MITM. I hope that such setup will be unpractical in the near future.
About your legal requirements; Weighing is what courts daily do. So, such requirements are not asking you to destroy the integrity and confidentiality >95% of users activity. Blocking Routing, DNS, IPs, Ports are the way to go.
I would add avoiding google and all google products by all means to the above list ;-)
valeri
Leon Fauster wrote:
About your legal requirements; Weighing is what courts daily do. So, such requirements are not asking you to destroy the integrity and confidentiality >95% of users activity. Blocking Routing, DNS, IPs, Ports are the way to go.
And how do you get a list of the IPs serving content the students are not supposed to see?
How is this done anyway? Does the government give out a list of URLs or IPs which you are required to block? If not, what if you overlook something?
Le 06/03/2018 à 18:48, hw a écrit :
And how do you get a list of the IPs serving content the students are not supposed to see?
How is this done anyway? Does the government give out a list of URLs or IPs which you are required to block? If not, what if you overlook something?
Here's some information.
https://dsi.ut-capitole.fr/documentations/cache/squidguard_en.html
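For reference, a squidGuard configuration consuming those blacklists might look like this (paths and the category name are assumptions; adapt them to the lists you actually download):

```
# /etc/squid/squidGuard.conf (sketch)
dbhome /var/squidGuard/blacklists
logdir /var/log/squidGuard

dest adult {
    domainlist adult/domains
    urllist    adult/urls
}

acl {
    default {
        pass !adult all
        redirect http://proxy.example.lan/blocked.html
    }
}
```

The Toulouse lists ship as plain domains/urls files per category, so each category you want to block gets its own dest block like the one above.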
Cheers,
Niki
Nicolas Kovacs wrote:
Le 06/03/2018 à 18:48, hw a écrit :
And how do you get a list of the IPs serving content the students are not supposed to see?
How is this done anyway? Does the government give out a list of URLs or IPs which you are required to block? If not, what if you overlook something?
Here's some information.
https://dsi.ut-capitole.fr/documentations/cache/squidguard_en.html
The government says you must use squidguard to filter something?
Le 08/03/2018 à 11:30, hw a écrit :
The government says you must use squidguard to filter something?
The law in France (Code Pénal, article 227-24) states that a public network is not allowed to broadcast messages containing violence, pornography or any content contrary to basic human dignity, which is theoretically punishable by up to three years in prison or a €75,000 fine.
So any network that offers public access is required by law to operate such filtering. This is the case for schools, town halls, public libraries, etc.
How this filtering is achieved is left to the admin's discretion.
Cheers,
Niki
On 03/08/18 06:09, Nicolas Kovacs wrote:
Le 08/03/2018 à 11:30, hw a écrit :
The government says you must use squidguard to filter something?
The law in France (Code Pénal, article 227-24) states that a public network is not allowed to broadcast messages containing violence, pornography or any content contrary to basic human dignity, which is theoretically punishable by up to three years in prison or a €75,000 fine.
Yes, I have always wondered which is more advantageous for citizens: to show the suicide bomber/shooter attacks that happened in France on public news channels, or not to show them, since they definitely were acts of violence. The second keeps French people deluded about safety and the sources of danger in France, but may be advantageous for a government that can keep pursuing its policies without the results of those policies (such violent attacks) being questioned by the public.
Having said that, I have a feeling the discussion has slipped into politics on this technical list... maybe we should bring things back to purely technical questions?
Valeri
So any network that offers public access is required by law to operate such filtering. This is the case for schools, town halls, public libraries, etc.
How this filtering is achieved is left to the admin's discretion.
Cheers,
Niki
Nicolas Kovacs wrote:
Le 08/03/2018 à 11:30, hw a écrit :
The government says you must use squidguard to filter something?
The law in France (Code Pénal, article 227-24) states that a public network is not allowed to broadcast messages containing violence, pornography or any content contrary to basic human dignity, which is theoretically punishable by up to three years in prison or a €75,000 fine.
So any network that offers public access is required by law to operate such filtering. This is the case for schools, town halls, public libraries, etc.
How this filtering is achieved is left to the admin's discretion.
But you aren't broadcasting messages, are you?
If they mean something like "making data accessible", the only way to comply with such a law is by not providing public access at all. How do you distinguish between things that are contrary to basic human dignity and things that aren't, and how do you keep track of all existing sources of data so that you can decide whether you need to block them? And who gets to decide?
For example, I could argue that tracking people's online activities, storing their data longer than is unavoidable for the purpose they were gathered for, gathering data about people without their explicit consent, and displaying advertisements in public are all against basic human dignity. All of these take away my freedom, and I want to be protected against them and to have the means to stay in control of my data and thus of my life. Freedom and being able to control one's own life certainly fall under basic human dignity.
Someone else could argue against this. You would need to block all access to this mailing list because a judge might find it against basic human dignity that someone is saying something else.
You might also need to block all access to information about how immigrants are being treated in Europe, because the way at least some of them are treated could be against basic human dignity.
And what right does the French government have to demand such censorship? This kind of censorship is itself against human dignity.
Le 08/03/2018 à 17:15, hw a écrit :
But you aren't broadcasting messages, are you?
If they mean something like "making data accessible", the only way to comply with such a law is by not providing public access at all. How do you distinguish between things that are contrary to basic human dignity and things that aren't, and how do you keep track of all existing sources of data so that you can decide whether you need to block them? And who gets to decide?
For example, I could argue that tracking people's online activities, storing their data longer than is unavoidable for the purpose they were gathered for, gathering data about people without their explicit consent, and displaying advertisements in public are all against basic human dignity. All of these take away my freedom, and I want to be protected against them and to have the means to stay in control of my data and thus of my life. Freedom and being able to control one's own life certainly fall under basic human dignity.
Someone else could argue against this. You would need to block all access to this mailing list because a judge might find it against basic human dignity that someone is saying something else.
You might also need to block all access to information about how immigrants are being treated in Europe, because the way at least some of them are treated could be against basic human dignity.
And what right does the French government have to demand such censorship? This kind of censorship is itself against human dignity.
Guys. This is the CentOS mailing list, a place to discuss technical questions... such as web content filtering.
As for the content in question, the law was mainly made for kids, to prevent them from watching porn, decapitation videos or various tutorials about growing weed or building bombs.
I doubt this is the right place to air your various beefs with humanity in general and the French government in particular. So please, let's all get back on topic.
As a follow-up, I just published an article on how to combine an existing installation of Squid with SquidAnalyzer:
* https://blog.microlinux.fr/squidanalyzer-centos/
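In case it helps, the moving parts boil down to a cron job running the report generator against Squid's access log (paths below are assumptions based on common defaults, not necessarily the article's):

```
# /etc/cron.d/squidanalyzer (sketch): rebuild the HTML reports nightly
0 2 * * * root /usr/bin/squid-analyzer -c /etc/squidanalyzer/squidanalyzer.conf
```

The generated static HTML can then be served by any local web server.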
Cheers,
Niki
Am 08.03.2018 um 18:07 schrieb Nicolas Kovacs info@microlinux.fr:
Guys. This is the CentOS mailing list, a place to discuss technical questions... such as web content filtering.
Just to rephrase my implicit question: does your setup work for the combination of the Chrome browser and google.com?
Or, in general, what are the limits of the setup you described? Just curious ...
-- LF
Le 08/03/2018 à 19:09, Leon Fauster a écrit :
Just to rephrase my implicit question: does your setup work for the combination of the Chrome browser and google.com?
Or, in general, what are the limits of the setup you described? Just curious ...
Works perfectly.
https://blog.microlinux.fr/squid-https-centos/#chrome
Cheers,
Niki
On 2/28/2018 4:23 PM, Nicolas Kovacs wrote:
Hi,
I've been running Squid successfully on CentOS 7 (and before that on 6 and 5), and it's always been running nicely. I've been using it mostly as a transparent proxy filter in school networks.
So far, I've only been able to filter HTTP.
Do any of you do transparent HTTPS filtering ? Any suggestions, advice, caveats, do's and don'ts ?
Cheers from the snowy South of France,
Niki
I made a video on doing this yesterday on Debian. If you skip the part about the Debian install and use the CentOS Squid 3.5 packages from the binary package repo provided by the Squid project, you should be able to follow the same directions.