Squid cache Hierarchies:
Parent-Child Hierarchies:
Here we will define the parent-child cache peering relationship. The cache is located on two servers: a main cache server called the parent, and a local cache server called the client. All local users query the client server for the cache, and the client pulls from the parent whatever it does not hold. Only one squid server (the parent) is connected to the external network, so for auditing purposes all traffic from the other squid servers is routed through that single proxy server.
Configuring the cache peering
Scenario:
The local network 192.168.1.0/24 queries the cache client client.cache.domain.com:3128, which in turn queries parent.cache.domain.com for any object not found locally. The parent cache server is the only one connected to the Internet. The client uses port 3130 (UDP) to find out whether the requested object is present in the parent cache.
Note:-
Squid supports multiple inter-cache protocols: CARP (Cache Array Routing Protocol), ICP, HTCP (Hypertext Caching Protocol), Cache Digests, etc. Make sure that ports 4827, 3130 and 3128 are open in the firewall if the client cache is behind one.
Configuring the cache-peer
On client.cache.domain.com
# vim squid.conf
--------------
cache_peer parent.cache.domain.com parent 8080 3130 default
--------------
# reload squid
This makes client.cache.domain.com query parent.cache.domain.com using cache-peer (ICP) port 3130 and proxy port 8080, with default settings.
Test by pointing the proxy settings of hosts in the local subnets at client.cache.domain.com. The client will try to pull the page from client.cache.domain.com; if the page is not found there, the squid running on client.cache.domain.com will contact parent.cache.domain.com for the cache. Check the access.log file for the request path.
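A minimal test sketch from a host on the 192.168.1.0/24 network (the host names are the ones assumed in this scenario, and the download URL is a placeholder):
---------
# export http_proxy=http://client.cache.domain.com:3128
# wget http://www.example.com/
# tail -f /var/log/squid/access.log (run this on the client proxy)
---------
In the access.log hierarchy field, entries such as FIRST_UP_PARENT/parent.cache.domain.com (rather than DIRECT) confirm the request was forwarded through the parent.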
Sibling Hierarchies:
Sibling-cache Relationship
This shares the cache among multiple squid servers: if a server gets a query for an object that is not found in its own cache, it queries its sibling proxy servers for the same object. Implementing this feature saves bandwidth and the time taken to download the page. In this case the cache is shared among the sibling servers.
Configuration:
On client.cache.domain.com
# vim squid.conf
---------
cache_peer parent.cache.domain.com sibling 8080 3130 default
---------
# reload squid
On parent.cache.domain.com
# vim squid.conf
-----------
cache_peer client.cache.domain.com sibling 8080 3130 default
-----------
# reload squid
This makes both servers act as siblings: each will serve an object from the other's cache, if present, before querying the Internet.
Test by setting up the proxy variables on a client and check the access.log file on both servers. We will be able to trace the query across the sibling servers here.
Limiting the squid service access:
To limit the number of connections each client may open to the proxy.
# vim squid.conf
---------
acl all src 0.0.0.0/0.0.0.0
acl connection_limit maxconn 10
http_access deny connection_limit all
---------
# reload squid
If a user attempts to open more than 10 concurrent connections to the server, squid denies the additional connections. Test using wget.
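A quick way to exercise the maxconn limit is to open many parallel downloads from a single host; a sketch, assuming a proxy at proxy.domain.com and any large downloadable file:
---------
# for i in $(seq 1 15); do http_proxy=http://proxy.domain.com:3128 wget -q -O /dev/null http://www.example.com/bigfile.iso & done
---------
The first 10 connections proceed; the rest should be rejected, showing TCP_DENIED entries in access.log.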
Transparent Proxy
Local network (No proxy settings in browser)-> proxy/firewall (http accelerator and Iptables) -> Internet
Configuring the Transparent Proxy in proxy/Firewall box
Step 1:
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
(whatever packets arrive on the box with destination port 80 are redirected to port 3128)
Step2:
Now we have to configure squid as a transparent proxy by adding the http acceleration directives (needed only in the older 2.x series; newer versions do not need this, as noted after the example).
# vim squid.conf
-----------
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
-----------
# reload squid
These directives make the proxy run in transparent mode.
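For reference, newer Squid releases (2.6 and later) drop the httpd_accel_* directives and enable interception on the listening port itself; a minimal sketch:
-----------
http_port 3128 transparent
-----------
(On Squid 3.1 and later the keyword is "intercept" instead of "transparent".)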
Note:-
# squid -v
This command shows the options squid was compiled with. Here we will find a key option used to interact with iptables when enabling the transparent proxy, named "--enable-linux-netfilter". It lets squid integrate with iptables while running in transparent mode.
Monday, October 26, 2009
Linux Security Notes 14: Squid notes 7: Bandwidth Management using Delay Pools
Squid - Delay Pools Bandwidth Management
This feature, introduced in the 2.x series, is used to restrict bandwidth usage for the user community.
Implementing bandwidth management using delay pool
Delay pools offer three different classes of restriction:
1. A class 1 pool restricts the download rate for large downloads.
It throttles the rate at which a large file is downloaded.
Implementing Class1 delay pool
Steps:
- Define the ACL for the delay pool
- Define the number of delay pools (delay_pools 1)
- Define the class of the delay pool (delay_class 1 1)
- Set the parameters for the pool number (delay_parameters 1 restore_rate/max_size). Once a request exceeds max_size, squid throttles that user/source to the given restore_rate (both values are measured in bytes), e.g. delay_parameters 1 20000/15000
- Enable the pool with the delay_access rule (delay_access)
# vim squid.conf
--------
acl bw_users src 192.168.1.0/24 # The acl defined for the Network
delay_pools 1 # Number of delay pools
delay_class 1 1 # Defines delay pool number 1 as a class 1 pool
delay_parameters 1 20000/15000 # Pool number 1 throttles to a restore rate of 20000 bytes/s once usage exceeds 15000 bytes
delay_access 1 allow bw_users # The access rule that ties the pool to the acl bw_users
--------
# reload squid
Once any single source exceeds the 15 KB download limit, squid throttles its download rate to the 20 KB/s restore rate.
Test the configuration by downloading files using wget
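A sketch of such a test (the proxy host name and download URL are assumptions):
--------
# export http_proxy=http://proxy.domain.com:3128
# wget -O /dev/null http://www.example.com/bigfile.iso
--------
wget prints the running download rate; after the first 15 KB it should settle near the 20 KB/s restore rate.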
Limitations of a class 1 pool:
If we have 1,500,000 bytes/s of bandwidth and configure a restore rate of 20,000 bytes/s, the maximum number of simultaneous full-rate connections is 1,500,000/20,000 = 75. With a large number of connections from sources, the link maxes out.
2. A class 2 pool caps bandwidth usage at a sustained aggregate rate.
Using a class 2 pool we can overcome the max-out limitation of class 1, because the restriction is imposed on the aggregate rate.
Configure the class 2 pool
Suppose we have a link with 1.5 Mbit/s of bandwidth,
we need to set a ceiling of 62500 bytes/s (500 kbit/s) for net usage,
and give each user 10% of that ceiling.
# vim squid.conf
----------
acl bw_users src 192.168.1.0/24 # The acl defined for the Network
delay_pools 1 # Number of delay pools
delay_class 1 2 # Defines pool number 1 as a class 2 pool
delay_parameters 1 62500/62500 6250/6250 # Aggregate ceiling of 62500 bytes/s (500 kbit/s of our 1.5 Mbit/s link), with an individual ceiling of 10% of that: each user is limited to 6250 bytes/s at any given time
delay_access 1 allow bw_users # The access rule that ties the pool to the acl bw_users
----------
# reload squid
Test the rate with wget. Here we can see that every source is restricted to 10% of the ceiling from the beginning, which leaves the rest of the bandwidth free for other purposes: out of 1.5 Mbit/s we have taken a ceiling of 0.5 Mbit/s for the internal network, and we have told squid that each requesting source should get 10% of that 0.5 Mbit/s ceiling.
Note:-
In a class 1 pool the restriction started only after the maximum download size was reached. In class 2, instead of a maximum download size we define a ceiling, and the user is restricted to it from the beginning.
3. A class 3 pool restricts bandwidth usage per subnet.
This implements bandwidth management with an aggregate rate per subnet, i.e. a class 2 pool with an additional subnet-based ceiling.
Configuring the class 3 pool
# vim squid.conf
----------
acl bw_users src 192.168.1.0/24 # The acl defined for the Network
delay_pools 1 # Number of delay pools
delay_class 1 3 # Defines pool number 1 as a class 3 pool
delay_parameters 1 62500/62500 31250/31250 6250/6250 # Aggregate ceiling of 62500 bytes/s (500 kbit/s); each subnet is limited to 50% of the ceiling (31250 bytes/s) and each user to 20% of the subnet ceiling (6250 bytes/s)
delay_access 1 allow bw_users # The access rule that ties the pool to the acl bw_users
----------
# reload squid
This makes squid limit each subnet to 50% of the ceiling (in case we have 2 subnets in our network) and each user to 20% of the subnet ceiling. That is, out of 1.5 Mbit/s we have taken a ceiling of 0.5 Mbit/s; each subnet shares 50% of that ceiling (0.25 Mbit/s); and within each subnet every user gets 20% (0.05 Mbit/s) of the subnet ceiling (0.25 Mbit/s).
Delay pool class 2 with a time-based ACL:
This implements bandwidth management only during business hours.
Configure the Class2 pool with time restriction
# vim squid.conf
----------
acl bw_users src 192.168.1.0/24 # The acl defined for the Network
acl work_time time MTWHF 09:00-18:00
delay_pools 1 # Number of delay pools
delay_class 1 2 # Defines pool number 1 as a class 2 pool
delay_parameters 1 62500/62500 25000/25000 # Each user is given a sustained rate of 25000 bytes/s
delay_access 1 allow bw_users work_time # Ties the pool to both acls, so it applies to bw_users only during work_time
----------
# reload squid
This activates the class 2 pool only during office hours. Test by changing the time on the squid server after configuring the class 2 pool with the time period.
Linux Security Notes 14: Squid notes 6: DNS Round Robin
Squid - Force proxy usage
Here we will explain how to force all outbound (Internet) traffic to route only through the proxy.
Step 1:
In the firewall, disable all outbound Internet access from the local network except for the squid server,
and explicitly allow the squid server outbound traffic to ports 80/443.
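A minimal iptables sketch of that firewall policy, assuming the firewall forwards traffic for 192.168.1.0/24 and the squid server is 192.168.1.1:
-----------
# iptables -A FORWARD -s 192.168.1.1 -p tcp -m multiport --dports 80,443 -j ACCEPT
# iptables -A FORWARD -s 192.168.1.0/24 -p tcp -m multiport --dports 80,443 -j DROP
-----------
Rule order matters here: the squid host must be accepted before the subnet-wide drop.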
Step 2:
Now configure the proxy server. Set up the proxy settings on the clients (browser settings as well as the http_proxy variable) and check the access.log file. This is the recommended setup, since all outbound traffic from our network is then monitored and controlled by squid.
Squid Load balancing - Using DNS Round Robin:
This configuration makes squid run in a load-balanced fashion: DNS round robin distributes requests from sources across all the squid servers entered in DNS. The cache can be shared among the squid servers, so all load-balanced requests see the same cache.
To obtain this we will configure two squid servers for our network
Network: 192.168.1.0/24
Squidserver1: 192.168.1.1
Squidserver2: 192.168.1.254
Step 1:
Install squid on both servers and configure both with the same business rules. Start squid and make sure it is running on both servers.
Step 2:
Configure the DNS for ROUND ROBIN:
Here we enter both squid servers in DNS by creating identical "A" records that point to the two squid servers' different IP addresses.
Eg: cache.domain.com A 192.168.1.1
cache.domain.com A 192.168.1.254
When we query the DNS server it replies with both IPs, and subsequent replies rotate the order of the A records: if the first query for the FQDN returns "cache.domain.com A 192.168.1.1" first, the next query returns "cache.domain.com A 192.168.1.254" first. Thus the load is balanced equally across both servers.
Check the working of DNS ROUND ROBIN using
# dig cache.domain.com
&
# nslookup cache.domain.com
Step 3:
Configure the browsers and proxy variables on the clients to use the squid server "cache.domain.com", and try to browse.
Note:-
Keep in mind the DNS caching done by the client; a cached answer pins that client to one server until the record expires.
Friday, October 23, 2009
Linux Security Notes 14: Squid notes 5: ACL - Cache Management
Squid ACL Cache Management:
Here we will have a look into squid cache management. We can make squid cache only certain websites, turn off its caching ability altogether, or run squid with its full caching ability.
Setting up squid as a non-caching service:
In this mode squid does not cache any requests; it runs only to log access and to honour the ACLs defined according to company rules.
Disable caching for requests from all source addresses
# vim squid.conf
------------
acl noncaching_hosts src 0.0.0.0/0.0.0.0
no_cache deny noncaching_hosts
------------
# reload squid
The no_cache tag is used to manage caching via ACLs. With the deny option, no_cache makes all requests from the given src addresses run in non-cache mode. Monitor the access.log file to verify: "HIT" entries denote objects served from cache, "MISS" entries objects that were not. From now on we should see only MISS entries in access.log.
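A quick way to watch for HITs, assuming the default log location:
------------
# tail -f /var/log/squid/access.log | awk '{print $4}'
------------
The fourth field carries the result code (TCP_MISS/200, TCP_HIT/200, ...); with caching disabled, TCP_HIT entries should no longer appear.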
Disable caching for specific sites in Internet/Intranet:
# vim squid.conf
---------
acl no_cache_sites dstdomain .domain.com
no_cache deny no_cache_sites
---------
# reload squid
This makes squid disable caching only for the given domain; all other domains are still cached. A number of domains can also be defined in a file.
Disable caching for dynamic sites (sites that run on .php/.asp/.pl/.cgi/.jsp etc.)
# vim squid.conf
---------
acl no_cache_dynamic_sites url_regex "/etc/squid/no_cache_sites"
no_cache deny no_cache_dynamic_sites
---------
# vim /etc/squid/no_cache_sites
---------
\.php$
\.cgi$
\.jsp$
\.pl$
\.asp$
---------
# reload squid
This makes squid skip caching for URLs matching the expressions listed in the file.
Caching based on user access, using IP addresses:
The end result should be no cache activity for requests from the admins (192.168.100.0/24) and the executive (192.168.200.3), but everyone else should be cached.
# vim squid.conf
--------
acl no_cache_users src 192.168.100.0/24 192.168.200.3
no_cache deny no_cache_users
--------
# reload squid
Here requests from the network 192.168.100.0/24 and from the IP address 192.168.200.3 are not cached, but all other requests are.
Linux Security Notes 14: Squid notes 4: ACLs 1
Squid ACLs
The importance of access controls cannot be overstated, so it is important to have a good understanding of how to control who uses squid. Access controls use two components: first the acl, which defines clients, IP addresses, hostnames, origin port numbers, request methods and so on; then the rules that apply those acls.
Syntax:
1. Define ACL
acl <unique_name> <type> <decision_string>  (the type is any criterion such as port/src/dst/dstdomain/srcdomain/time etc.)
2. Apply ACL using criteria
http_access <allow|deny> <acl_unique_name>  (a leading ! negates the rule)
Eg:-
acl Safe_ports port 80
http_access deny !Safe_ports (denies every destination port other than 80)
Squid matches the http_access rules in the config file from top to bottom and acts on the first rule that matches.
SCENARIOS BASED ON ACLs
Restricting a single host (192.168.10.57) using ACL
# vim squid.conf
--------
acl badhost src 192.168.10.57
http_access allow !badhost
# or use the following
http_access deny badhost
--------
# reload squid
Restricting Multiple hosts
# vim squid.conf
--------
acl badhosts src 192.168.10.50 192.168.10.51 192.168.10.52 192.168.10.53
http_access allow !badhosts
# or use the following
http_access deny badhosts
--------
# reload squid
ACLs Lists
ACLs can usually be defined in two ways.
1. Repeating the acl name on additional lines
e.g. the Safe_ports acl is defined in this way:
----------
acl Safe_ports port 80
acl Safe_ports port 443
acl Safe_ports port 70
http_access deny !Safe_ports
----------
2. Defining the list in a single file.
# vim /etc/squid/badhosts
-------
192.168.1.50
192.168.1.51
192.168.1.52
192.168.1.53
-------
# vim squid.conf
-------
acl badhosts src "/etc/squid/badhosts"
http_access deny badhosts
-------
# reload squid
Here we made the acl look up its values in the text file when parsing requests.
Define ACL based on TIME:
Squid recognizes time acls using the following syntax:
Day of the week (DOW)
S = Sunday
M = Monday
T = Tuesday
W = Wednesday
H = tHursday
F = Friday
A = sAturday
Hours and Minutes
hh:mm-hh:mm (use the 24-hour time format)
Restrict access during working/business hours
syntax:
acl work_hours time [days_of_week] [hours_of_day]
We can illustrate it with the following examples.
To deny access to squid between 9:30 AM and 5 PM every day, we can use the following:
#vim squid.conf
-----
acl work_time time 09:30-17:00
http_access deny work_time
-----
# reload squid
This denies all requests to squid between 9:30 and 17:00.
To deny access to squid between 9:30 AM-12:20 PM and 2:00 PM-6:00 PM every day, we can use the following:
#vim squid.conf
-----
acl work_time time 09:30-12:20
http_access deny work_time
acl work_time2 time 14:00-18:00
http_access deny work_time2
-----
# reload squid
This denies Internet access during the given periods, 9:30 AM-12:20 PM and 2:00 PM-6:00 PM, every day. If we need to exempt any users from this rule, define a rule permitting their access above this ACL.
To deny access to squid between 9:30 AM and 5 PM on Monday, Wednesday, Thursday, Friday and Saturday, we can use the following:
#vim squid.conf
-----
acl work_time time MWHFA 09:30-17:00
http_access deny work_time
-----
# reload squid
This denies access on those days (M, W, H, F, A) between 9:30 AM and 5 PM.
Defining access to destination domains using ACLs
Two ways can be used to obtain the result:
- By creating the rules inside squid.conf
- By creating a list of destination domains in a text file
1. Deny destination domains by creating rules inside squid.conf
# vim squid.conf
------
acl time_waste_sites dstdomain .yahoo.com
acl time_waste_sites dstdomain .msn.com
acl time_waste_sites dstdomain .orkut.com
acl time_waste_sites dstdomain .ebay.com
http_access deny time_waste_sites
------
# reload squid
This denies every website in the domains defined in squid.conf, e.g. mail.yahoo.com, app.yahoo.com, ebay.com, test.ebay.com, etc.
2. Deny destination domains by creating a list file of destinations
# vim /etc/squid/time_waste_domains.txt
-----
.msn.com
.orkut.com
.ebay.com
-----
# vim squid.conf
-----
acl time_waste dstdomain "/etc/squid/time_waste_domains.txt"
http_access deny time_waste
-----
# reload squid
ACL ANDED RULES
Listing several acls on one rule line combines them with AND logic. For example, this is useful for defining a rule that denies access to certain websites during business hours.
Denying Certain Sites At Given Time using ACL ANDing Rule:
# vim squid.conf
-----------
acl time_waste time MWHFA 09:30-17:00
acl waste_domain dstdomain "/etc/squid/time_waste_domains.txt"
http_access deny time_waste waste_domain
-----------
# reload squid
This denies access to the sites listed in /etc/squid/time_waste_domains.txt during 09:30-17:00 on the days M, W, H, F and A.
Deny certain sites at a given time for a set of users, using an ACL ANDing rule:
# vim squid.conf
-----------
acl lazy_workers src 192.168.233.0/24
acl time_waste time MWHFA 09:30-17:00
acl waste_domain dstdomain "/etc/squid/time_waste_domains.txt"
http_access deny lazy_workers time_waste waste_domain
-----------
# reload squid
This denies access to the sites listed in /etc/squid/time_waste_domains.txt during 09:30-17:00 on the days M, W, H, F and A, but only for hosts in the given IP range.
ANDing using criteria definitions:
Scenario:
We have to create a rule restricting casual website access during business hours.
In this scenario we have to consider certain criteria:
1. Work hours = MTWHF 9:00-18:00
2. Source subnet = 192.168.1.0/24
3. Permitted search domain = google.com should be allowed
So now we shall define the ACLs to meet the above requirements.
# vim squid.conf
----------
#Acl to allow the search domains
acl work_sites dstdomain .google.com
http_access allow work_sites
# ACL to deny all sites other than work_sites for lazy_guys during working hours on weekdays
acl lazy_guys src 192.168.1.0/24
acl work_hours time MTWHF 09:00-18:00
http_access deny lazy_guys work_hours
----------
# reload squid
This allows only google.com for the lazy_guys on weekdays from 9:00 AM to 6:00 PM. Access to other sites is granted during the times not covered here (non-office hours and weekends).
Note:-
The ANDed rules in an ACL apply only when all the criteria match, i.e. a request from the source range (192.168.1.0/24) at the defined time (Mon-Fri 9:00 AM to 6:00 PM). If the request does not match, the following (default) rules apply.
Wednesday, October 21, 2009
Linux Security Notes 14: Squid notes 3: Cachemgr & Port configuration
Squid implementing cachemgr.cgi
This script returns information about the squid process and its performance, such as memory usage, CPU usage, the number of HITs and MISSes, etc. The cachemgr.cgi tool is included in the squid rpm package.
# rpm -ql squid | grep cachemgr.cgi
This will show the installation path of the cgi script.
The script is executed by apache, so we first need to find out where apache processes cgi scripts.
# grep -i scriptalias httpd.conf
-----
ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
-----
So we have to place cachemgr.cgi in the "/var/www/cgi-bin/" directory for apache to execute it.
# cp cachemgr.cgi /var/www/cgi-bin/
# reload apache
Open the web browser and navigate to http://localhost/cgi-bin/cachemgr.cgi
This opens a page asking for the squid server information and the cache manager credentials; the default credentials are empty. Continue exploring the script: a menu is available with all the utilization report tools. Browse through the options to gather information on the squid process and its statistics.
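The manager interface can also be protected with a password; a minimal sketch using the cachemgr_passwd directive in squid.conf (the password shown is an assumption):
-----
cachemgr_passwd MySecret all
-----
After a reload, cachemgr.cgi prompts for this password before showing any report.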
Changing the default port of Squid
# vim /etc/squid/squid.conf
-----
http_port 8080
-----
# reload squid
This changes the port to 8080. "https_port" is another directive, used as an accelerator to speed up back-end SSL-based servers.
squid - Safe ports
This is the list of destination ports to which squid is allowed to connect. The safe ports are defined using an ACL.
Below is a sample safe-port configuration in squid:
#vim squid.conf
----------
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443 563
http_access deny !Safe_ports
----------
The listed ports are allowed by squid; all other destination ports are denied, and the client receives an error page stating that access was denied by an ACL. Use this safe-ports ACL to enable or disable any destination port squid needs to serve; in short, it restricts access by destination port.
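The stock squid.conf pairs this with a similar rule for the CONNECT method, restricting tunnels to SSL ports; a sketch along those lines:
----------
acl SSL_ports port 443 563
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
----------
This stops clients from tunnelling arbitrary TCP traffic through the proxy via CONNECT.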
Tuesday, October 20, 2009
Linux Security Notes 14: Squid notes 2: Log analysis
Squid Logs
By default squid logs to the /var/log/squid/ directory. We will have a detailed look into each of the files inside this directory.
squid.out
This contains details about cache initialization (RAM and swap) during squid startup; only basic system info.
access.log
Registers the caching activity (HIT or MISS), user access logs, etc. This is the main log file: it records user activity and everything about the requests received by the squid server.
Fields in access.log:
-----------
11298722788.699 15098 192.168.1.1 TCP_MISS/200 2048 GET http://www.yahoo.com/ - DIRECT/64.20.165.254 text/html
-----------
field1: Timestamp (Unix epoch time: seconds since Jan 1, 1970, with millisecond precision)
field2: Elapsed time of the page/object delivery (milliseconds)
field3: Remote host
field4: Code/Status [e.g. TCP_MISS/200 (squid action/HTTP status); the status part uses the standard HTTP status codes]
field5: Bytes delivered to the client
field6: Method used to retrieve the page
field7: The destination URL
field8: IDENT identification; tells which user ran the client program
field9: Hierarchy: what squid did to return the page (e.g. DIRECT/64.20.165.254)
field10: MIME type
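As a quick illustration of these fields, a small sketch that tallies field 4 (the squid action) to estimate the cache hit rate (default log path assumed):
-----------
# awk '{split($4,a,"/"); n[a[1]]++} END {for (c in n) print c, n[c]}' /var/log/squid/access.log
-----------
The output lists counts per action (TCP_HIT, TCP_MISS, ...), from which the hit ratio follows.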
Note:-
Squid also supports the Common Log Format (CLF), which records fewer details.
To enable CLF logging in squid:
#edit /etc/squid/squid.conf
-----
emulate_httpd_log on
-----
This makes squid log in the Common Log Format, which is useful if we use a third-party tool to parse the squid logs.
cache.log
Stores errors and debugging information from the squid daemons, i.e. system information logs.
store.log
This maintains the squid cache content log, i.e. details about the objects stored in the cache.
Fields:
----------
22113499023.433 RELEASE 00 FFFFFFFF 89037DHH29739DHD927AC0389 304 112399483 -1 -1 unknown -1/0 GET http://www.yahoo.com/image.jpg
----------
Field1: Timestamp (Unix epoch time: seconds since Jan 1, 1970, with millisecond precision; compare `date +%s`)
Field2: Action performed on the cached object (RELEASE, CREATE, SWAPOUT (written from memory to disk), SWAPIN (read from disk into memory))
Field3: Folder number of the cache (/var/spool/squid contains many directories that store the cache; this field refers to them)
Fields4/5: File name inside the folder denoted by field 3
Field6: HTTP status; this follows the standard HTTP status codes
Field7: Date included in the header of the file sent to the client
Field8: Last-modified timestamp of the file served to the client
Field9: Expiration time of the content
Field10: MIME type
Field11: Size of the content (content_length/actual size)
Field12: Method used to get the destination
Field13: The exact URL that was cached
Log analysis using Webalizer with the Common Log Format (CLF)
To use Webalizer we first make squid log in the Common Log Format:
#vim squid.conf
-----
emulate_httpd_log on
-----
#service squid restart
This makes squid start logging in CLF to /var/log/squid/access.log.
Installing webalizer
The default RHEL installation includes the webalizer package; if it is missing, install it using yum.
# rpm -qa |grep -i webalizer
webalizer-ver.xx.xx
Configure webalizer to parse the log produced by squid (webalizer can parse squid's native log format too, as shown later).
# vim /etc/webalizer.conf
------------
#change
LogFile /var/log/httpd/access_log
#to
LogFile /var/log/squid/access.log
HostName mysquidserver
------------
Now run the webalizer
# webalizer -c /etc/webalizer.conf
This processes the access.log file and writes the report into the output folder defined in webalizer.conf. That folder contains an index.html file which can be served by a webserver.
# Now configure and start the webserver to serve the html page created by the webalizer.
Configure the webalizer to use the Squid Native log format
# comment the option emulate_httpd_log on in squid.conf
# restart the squid service to start logging in squid native log format
Now configure & execute webalizer
# vim webalizer.conf
-----
LogType squid
-----
# webalizer -c /etc/webalizer.conf
This makes webalizer parse squid's native logs and generate the .html report. Now browse the report through the webserver.
Linux Security Notes 14: Squid notes 1: Introduction
SQUID
Intro
Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages. Squid has extensive access controls and makes a great server accelerator. It runs on most available operating systems,
including Windows, and is licensed under the GNU GPL.
Thousands of web-sites around the Internet use Squid to drastically increase their content delivery. Squid can reduce your server load and improve delivery speeds to clients. Squid can also be used to deliver content from around the world - copying only the content being used, rather than inefficiently copying everything. Finally, Squid's advanced content routing configuration allows you to build content clusters to route and load balance requests via a variety of web servers.
" [The Squid systems] are currently running at a hit-rate (the web page that served from cache) of approximately 75%, effectively quadrupling the capacity of the Apache servers behind
them. This is particularly noticeable when a large surge of traffic arrives directed to a particular page via a web link from another site, as the caching efficiency for that page will be nearly 100%. "
The normal setup is caching the contents of an unlimited number of webservers for a limited number of clients. Another setup is “reverse proxy” or “webserver acceleration” (using http_port 80 accel vhost). In this mode, the cache serves an unlimited number of clients for a limited number of—or just one—web servers.
Initialising SQUID
- install the squid
- configure the squid
- start the squid service
1. install the squid
Squid can be installed from an rpm. Squid uses a lot of system resources to serve requests, so better server hardware definitely improves the service. It relies on caching, so the more RAM, the better the performance.
For example, a squid server for about 100 users
needs around 150 GB of /var space;
RAM: the more the better.
Even installed on lower-spec hardware, squid will still serve a large community by making the best of the current hardware configuration, but higher-spec hardware is recommended for better results.
# rpm -qpl squid.ver.rpm
This lists the changes the squid package will make to the system when installed.
# rpm -ivh squid-ver.rpm
This installs the squid package.
/usr/sbin/squid is the daemon installed by the package.
/usr/sbin/squidclient is an installed binary used to query a squid server and check whether an object is cached on the local or a remote squid server.
Note:-
Squid has a master process that spawns a child to handle requests; it binds to the default port 3128. It is the child process, not the master, that binds to port 3128. Squid also binds UDP port 3130, which is used to distribute load among squid servers through peer-to-peer (ICP) communication.
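A way to verify this on the server (the exact PIDs will differ):
-------
# pgrep -l squid
# netstat -lnp | grep squid
-------
The listing should show TCP port 3128 and UDP port 3130 owned by the squid child process.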
2. configure the squid
The client makes its request to the proxy server; the proxy contacts the destination, caches the page and serves the content, and that cache is used to serve later requests. Access control must be configured to get it working: modify the /etc/squid/squid.conf file to configure squid.
3. start the squid service
An rpm installation can be started with the service command and enabled at boot with the chkconfig command.
CONFIGURATION OF SQUID
Initial/simple configuration:
#vim squid.conf
# Go to the Access Controls section
acl int src 192.168.1.0/255.255.255.0
http_access allow int
Here an acl is configured for the internal network and permission is granted with the http_access operator. Squid works through the configuration file from top to bottom, so once a matching pattern is found squid stops searching the config file and starts processing.
Testing Squid with squidclient:
#which squidclient
/usr/sbin/squidclient
This is a small utility that retrieves objects from a Squid server's cache.
# squidclient --help
Shows the options available in squidclient.
# squidclient -h localhost -p 3128 http://www.google.com
or
# squidclient http://www.google.com
This returns the page via the Squid server at localhost:3128. If Squid is running on localhost with the default port, there is no need to specify the -h (host) or -p (port) options.
# squidclient -g 3 http://www.google.com
This measures how quickly Squid can fetch http://www.google.com, repeating the request 3 times (0 means repeat indefinitely). The timings let us verify whether pages are being cached: if caching is happening, only the initial request takes time and the rest are considerably faster, whereas if the page is not served from cache, all requests take about the same time. Squid honours the caching permissions declared in the website's own HTML.
# squidclient -v http://www.google.com
This dumps the full contents of the web page to STDOUT.
# squidclient -h squidserver.domain.com -g 3 http://www.google.com
This queries the given remote Squid server for the cache of the given URL.
Applying the proxy settings for text-based HTTP clients/shell-based tools
Steps to enable the proxy in wget/lftp/lynx
Step 1:
export http_proxy=http://proxy.domain.com:3128
This variable is honoured by almost all text-based HTTP clients.
Step 2:
Now start the clients:
# wget http://remotewebsite.com
Now check the Squid access log to see the request made by the client.
# lftp http://remotewebsite.com
This allows HTTP pages to be downloaded; lftp also honours the 'http_proxy' variable.
# lynx http://remotewebsite.com
Lynx will likewise serve the web page through the proxy.
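To make the proxy setting persistent across shell sessions, a common approach (assuming a bash shell) is to append the export to the user's ~/.bashrc:
# echo 'export http_proxy=http://proxy.domain.com:3128' >> ~/.bashrc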
Wednesday, May 13, 2009
Squid transparent server configuration (old style, using http_accel)
http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
#--
no_cache deny QUERY
cache_mem 100 MB
#--------
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
#--
dns_nameservers 192.168.1.7 202.56.250.5 202.56.230.6
#---------
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
#---------- Full Access Definitions ----------
acl admin src 192.168.1.99 192.168.1.12 192.168.1.75 192.168.1.76 192.168.1.124 192.168.1.129
http_access allow admin
acl murahari src 192.168.1.145
#------------------------------------------
acl download urlpath_regex "/etc/squid/blocks.files.acl"
acl local src 192.168.1.0/255.255.255.0
#acl local2 src 192.168.0.0/255.255.255.0
http_access deny download
deny_info ERR_BLOCKED_FILES download
#########--------------------- Blocking URLS ---------
acl valid_sites url_regex "/etc/squid/valid_sites.txt"
http_access allow valid_sites
acl music_domains url_regex "/etc/squid/block/music/domains"
acl music_urls url_regex "/etc/squid/block/music/urls"
acl movies_domains url_regex "/etc/squid/block/movies/domains"
acl movies_urls url_regex "/etc/squid/block/movies/urls"
acl gamble_domains url_regex "/etc/squid/block/gamble/domains"
acl gamble_urls url_regex "/etc/squid/block/gamble/urls"
acl chat_domains url_regex "/etc/squid/block/chat/domains"
acl chat_urls url_regex "/etc/squid/block/chat/urls"
#acl webmail_domains url_regex "/etc/squid/block/webmail/domains"
#acl webmail_urls url_regex "/etc/squid/block/webmail/urls"
acl dating_domains url_regex "/etc/squid/block/dating/domains"
acl dating_urls url_regex "/etc/squid/block/dating/urls"
acl webradio_domains url_regex "/etc/squid/block/webradio/domains"
acl webradio_urls url_regex "/etc/squid/block/webradio/urls"
#acl _domains url_regex "/etc/squid/block"
acl share url_regex "/etc/squid/block/share/urls"
acl virus url_regex majesty italy-fund exitexchange trafficholder tamotua
acl proxyservers url_regex orkut rapidshare
acl proxyservers url_regex orkut proxy proxi prox rapidshare
acl rapidshare url_regex rapid
acl ncbi url_regex ncbi
http_access allow ncbi
http_access allow rapidshare murahari
http_access deny share
http_access deny virus
http_access deny proxyservers
http_access deny music_domains
http_access deny music_urls
http_access deny movies_domains
http_access deny movies_urls
http_access deny gamble_domains
http_access deny gamble_urls
http_access deny chat_domains
http_access deny chat_urls
#http_access deny webmail_domains
#http_access deny webmail_urls
http_access deny dating_domains
http_access deny dating_urls
http_access deny webradio_domains
http_access deny webradio_urls
###########---------------------------------##################
http_access allow local
#http_access allow local2
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname firecone
coredump_dir /var/spool/squid
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
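Note:- The four httpd_accel_* directives above are the old (Squid 2.5 and earlier) way of combining interception with normal proxying, as the post title indicates. In Squid 2.6 and later these directives were removed and replaced by options on http_port, roughly:
---------
http_port 3128 transparent
---------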