Friday, October 23, 2009

Linux Security Notes 14: Squid notes 5: ACL - Cache Management

Squid ACL Cache Management:
    Here we will have a look into Squid cache management. We can make Squid cache only a certain set of websites, turn off its caching ability entirely, or run with Squid's full caching ability.

Setting up squid as a non-caching service:
    In this mode Squid will not cache any requests; it runs only to log access and honour the ACLs defined by company policy.

Disable caching for requests from all source addresses
# vim squid.conf
------------
acl    noncaching_hosts    src 0.0.0.0/0.0.0.0
no_cache    deny    noncaching_hosts
------------
# reload squid

    The no_cache directive configures cache management against an ACL. With the deny option, no_cache makes all requests from the given source addresses run in non-cache mode. Monitor the access.log file to verify: a "HIT" is logged when a request is served from the cache and a "MISS" when it is not, so from now on only MISS entries should appear in access.log.
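    To verify, something like the following can watch the log live (a minimal sketch; the path assumes the default RHEL log location):
# tail -f /var/log/squid/access.log | grep "HIT"
    Once the rule is active, repeated requests for the same page should no longer produce TCP_HIT entries.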

Disable caching for specific sites on the Internet/intranet:
# vim squid.conf
---------
acl    no_cache_sites    dstdomain    .domain.com
no_cache    deny    no_cache_sites
---------
# reload squid

    This makes Squid disable caching only for the given domain; all other domains will still be cached. A list of domains can also be defined in a file, as shown below.
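    A sketch of the file-based variant (the file name /etc/squid/no_cache_domains and the listed domains are arbitrary examples):
# vim squid.conf
---------
acl    no_cache_sites    dstdomain    "/etc/squid/no_cache_domains"
no_cache    deny    no_cache_sites
---------
# vim /etc/squid/no_cache_domains
---------
.domain1.com
.domain2.com
---------
# reload squid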

Disable caching for dynamic sites (sites served by .php/.asp/.pl/.cgi/.jsp scripts, etc.)
       
# vim squid.conf
---------
acl    no_cache_dynamic_sites    url_regex    "/etc/squid/no_cache_sites"
no_cache    deny    no_cache_dynamic_sites
---------
# vim /etc/squid/no_cache_sites
---------
\.php$
\.cgi$
\.jsp$
\.pl$
\.asp$
---------
# reload squid

    This makes Squid skip caching any URL that matches one of the expressions listed in the file.

Caching based on user access using IP address:
The end result should be no cache activity for requests from the admins (192.168.100.0/24) and the executive (192.168.200.3), but everyone else should be cached.

# vim squid.conf
--------
acl no_cache_users    src    192.168.100.0/24    192.168.200.3
no_cache    deny    no_cache_users
--------

# reload squid

    Here all requests from the network 192.168.100.0/24 and the IP address 192.168.200.3 will not be cached, but all other requests will be cached.
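    A quick sanity check is to search the log for cache hits from the exempted addresses; once the rule is active this should return nothing (a sketch, assuming the default log path):
# grep "192.168.100." /var/log/squid/access.log | grep "HIT"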


Linux Security Notes 14: Squid notes 4: ACLs 1

Squid ACLs
          The importance of access controls cannot be overstated. It is important to have a good understanding of how to control who uses Squid. Access controls are built from two components. The first is the acl, which defines clients, IP addresses, hostnames, origin port numbers, and request methods. Once these are created, they are combined with rules that act on the acls.
Syntax:
1. Define ACL
acl - a unique_name - type (any criterion such as port/src/dst/dstdomain/srcdomain/time_of_day etc.) - decision_string
2. Apply ACL using criteria
http_access - permission(allow|deny) - acl_unique_name [! negates the rule]

Eg:-
acl Safe_ports port 80
http_access deny !Safe_ports (denies all destination ports other than port 80)

    Squid matches the acls in the config file from top to bottom and acts on the first rule that matches.
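    Order therefore matters: a specific allow must appear above a broader deny. A minimal sketch (the host address is an arbitrary example; the 'all' ACL is predefined in the stock squid.conf):
--------
acl goodhost src 192.168.10.5
http_access allow goodhost
http_access deny all
--------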

SCENARIOS BASED ON ACLs

Restricting a single host (192.168.10.57) using ACL
# vim squid.conf
--------
acl badhost src 192.168.10.57
http_access allow !badhost
# or use the following
http_access deny badhost
--------

# reload squid

Restricting Multiple hosts
# vim squid.conf
--------
acl badhosts src 192.168.10.50 192.168.10.51 192.168.10.52 192.168.10.53
http_access allow !badhosts
# or use the following
http_access deny badhosts
--------
# reload squid

ACLs Lists

Usually ACLs can be defined in 2 ways.

1. Redefining the same acl name on additional lines
e.g.:- acl Safe_ports is defined in this way
----------
acl Safe_ports port 80
acl Safe_ports port 443
acl Safe_ports port 70
http_access deny !Safe_ports

----------

2. Defining the list in a single file.
# vim /etc/squid/badhosts
-------
192.168.1.50
192.168.1.51
192.168.1.52
192.168.1.53
-------

# vim squid.conf
-------
acl badhosts src "/etc/squid/badhosts"
http_access deny badhosts
-------
# reload squid

    Here the acl looks up the addresses in the text file when parsing each request.
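    Note that the file is read only when Squid parses its configuration, so after editing the list the configuration must be reloaded; on a typical RHEL system either of the following works:
# squid -k reconfigure
# service squid reload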

Define ACL based on TIME:
         Squid recognizes time using the following syntax:
Day of the week (DOW)
S = Sunday
M = Monday
T = Tuesday
W = Wednesday
H = tHursday
F = Friday
A = sAturday

Hours and Minutes
    hh:mm-hh:mm (We have to use the 24Hrs time format)

Restrict access during working/business hours
syntax:
acl work_hours    time    [days_of_week] [hours_of_day]
We can illustrate it with the following examples.
To deny access to squid from 9:30 AM to 5:00 PM every day we can use the following syntax
#vim squid.conf
-----
acl    work_time    time    09:30-17:00
http_access deny work_time
-----
# reload squid

    This will deny all requests to squid between 09:30 and 17:00.

To deny access to squid from 9:30 AM to 12:20 PM and from 2:00 PM to 6:00 PM every day we can use the following syntax
#vim squid.conf
-----
acl    work_time    time    09:30-12:20
http_access deny work_time
acl    work_time2    time    14:00-18:00
http_access deny work_time2
-----
# reload squid

    This will deny internet access during the given time periods, 9:30 AM to 12:20 PM and 2:00 PM to 6:00 PM, every day. If we need to exempt any users from this rule, define a rule that permits their access above these ACLs, as in the sketch below.
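    For example, a sketch exempting a single manager host (192.168.1.10 is an arbitrary example address); the allow rule must come above the deny rules:
#vim squid.conf
-----
acl    managers    src    192.168.1.10
http_access allow managers
acl    work_time    time    09:30-12:20
http_access deny work_time
-----
# reload squid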

To deny access to squid between 9:30 AM and 5:00 PM on Monday, Wednesday, Thursday, Friday, and Saturday we can use the following syntax
#vim squid.conf
-----
acl    work_time    time     MWHFA    09:30-17:00
http_access deny work_time
-----
# reload squid

    This will deny access on those days (MWHFA) between 9:30 AM and 5:00 PM.

Defining access to destination domains using ACLs.
Two ways can be used to obtain this result:
  • By creating the rules inside the squid.conf
  • By creating a List of destination domains in text file
1. Deny destination domains by creating the rules inside squid.conf

# vim squid.conf
------
acl    time_waste_sites    dstdomain    .yahoo.com
acl    time_waste_sites    dstdomain    .msn.com
acl    time_waste_sites    dstdomain    .orkut.com
acl    time_waste_sites    dstdomain    .ebay.com
http_access    deny    time_waste_sites
------
# reload squid

    This will deny every website under the domains defined in squid.conf, e.g. mail.yahoo.com, app.yahoo.com, ebay.com, test.ebay.com, etc. The leading dot matches the domain and all of its subdomains.

2. Deny destination domains by creating the list in a text file
# vim /etc/squid/time_waste_domains.txt
-----
.msn.com
.orkut.com
.ebay.com
-----
# vim squid.conf
-----
acl time_waste    dstdomain    "/etc/squid/time_waste_domains.txt"
http_access    deny    time_waste
-----
# reload squid


ACL ANDED RULES
    ANDed rules combine ACLs using AND logic. For example, this is useful for defining a rule that denies access to certain websites during business hours.

Denying Certain Sites At Given Time using ACL ANDing Rule:

# vim squid.conf
-----------
acl    time_waste    time     MWHFA    09:30-17:00
acl    waste_domain    dstdomain    "/etc/squid/time_waste_domains.txt"
http_access    deny    time_waste    waste_domain
-----------
# reload squid

    This will deny access to the sites listed in /etc/squid/time_waste_domains.txt during 09:30-17:00 on the days M, W, H, F & A (Monday, Wednesday, Thursday, Friday and Saturday).

Deny certain sites at a given time for a set of users using the ACL ANDing rule:
# vim squid.conf
-----------
acl    lazy_workers    src    192.168.233.0/24
acl    time_waste    time     MWHFA    09:30-17:00
acl    waste_domain    dstdomain    "/etc/squid/time_waste_domains.txt"
http_access    deny    lazy_workers    time_waste    waste_domain
-----------
# reload squid

    This will deny access to the sites listed in /etc/squid/time_waste_domains.txt during 09:30-17:00 on the days M, W, H, F & A, for hosts in the given IP range.

ANDing using criteria definition:
    Scenario:
          We have to create a rule restricting casual website access during business hours.
In this scenario we have to consider certain criteria:
1. Work hours = MTWHF    9:00-18:00
2. Source subnet = 192.168.1.0/24
3. Permit access to search domains = google.com should be allowed
          So now we shall define the ACLs to meet the above requirements.

# vim squid.conf
----------
#Acl to allow the search domains
acl    work_sites    dstdomain    .google.com
http_access    allow    work_sites
# ACL to deny all sites other than work_sites for lazy_guys during working hours on weekdays
acl    lazy_guys    src 192.168.1.0/24
acl    work_hours    time MTWHF    09:00-18:00
http_access    deny    lazy_guys    work_hours   
----------
# reload squid

    This allows only google.com for the lazy_guys on weekdays from 9:00 AM to 6:00 PM. Access to other sites is granted during the times not covered here (non-office hours and weekends).
Note:-
    An ANDed rule matches only when all of its criteria match, i.e. a request from the source IP range (192.168.1.0/24) at the defined time (Mon-Fri, 9:00 AM to 6:00 PM). If the request does not match, the default rule is applied, as sketched below.
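    The 'default rule' is typically a final catch-all http_access line at the end of the rule list; a common pattern (using the 'all' ACL predefined in the stock squid.conf):
----------
http_access deny all
----------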

Wednesday, October 21, 2009

Linux Security Notes 14: Squid notes 3: Cachemgr & Port configuration

Squid implementing cachemgr.cgi
    This script returns information about the Squid process and its performance, such as memory usage, CPU usage, number of HITs and MISSes, etc. The tool cachemgr.cgi is included in the squid rpm package.
# rpm -ql squid | grep cachemgr.cgi
    This will show the installation path of the cgi script.
The script is processed by Apache, so we have to find out where Apache processes CGI scripts.
# grep -i scriptalias httpd.conf
-----
ScriptAlias /cgi-bin/  "/var/www/cgi-bin/"
-----
    So we have to place cachemgr.cgi in the "/var/www/cgi-bin/" directory for Apache to process.
# cp cachemgr.cgi /var/www/cgi-bin/
# reload apache


Open the web browser and navigate to http://localhost/cgi-bin/cachemgr.cgi
    This opens a page that asks for the squid server details and the cache manager credentials; the default credentials are empty. Continue exploring the script: a menu is presented with all the system utilization reporting tools. Browse through each option to gather information and statistics about the squid process.

Changing the default port of Squid
# vim /etc/squid/squid.conf
-----
http_port 8080

-----
# reload squid
    This changes the listening port to 8080. "https_port" is another directive, used as an accelerator to speed up back-end SSL-based servers.
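    To confirm the new port, squidclient (covered in an earlier note) can be pointed at it:
# squidclient -h localhost -p 8080 http://www.google.com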

squid - Safe ports
    This is the list of ports to which squid will make destination connections. The safe ports are defined using an ACL.
Below is a sample safe-port configuration in squid
#vim squid.conf
----------
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443 563

http_access deny !Safe_ports

----------
    Squid will allow the listed ports and deny all others, returning an error HTML page to the client stating that access was denied by the ACL. This safe-port ACL can be used to enable or disable any destination port that squid needs to serve; in short, it restricts access by destination port.
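    For reference, the stock squid.conf pairs the Safe_ports rule with a similar restriction on CONNECT tunnels; a sketch of that companion rule:
----------
acl SSL_ports port 443 563
acl CONNECT method CONNECT
http_access deny CONNECT !SSL_ports
----------
    This denies CONNECT (SSL tunnelling) requests to any port other than the listed SSL ports.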

Tuesday, October 20, 2009

Linux Security Notes 14: Squid notes 2: Log analysis

Squid Logs
    By default squid logs to the /var/log/squid/ directory. We will have a detailed look into each of the files inside this directory.
squid.out    
      This contains details about the cache initialisation (RAM & swap) performed while squid starts; only basic system info.
access.log    
     Registers caching activity (HIT or MISS), user access, etc. This is the main log file; it records user activity and everything about the requests received by the squid server.
Fields in access.log:
-----------
11298722788.699    15098    192.168.1.1    TCP_MISS/200 2048 GET http://www.yahoo.com/    -    DIRECT/64.20.165.254    text/html
-----------
field1:  Time stamp (unix epoch time: seconds since Jan 1, 1970, with millisecond precision)
field2:  Elapsed_time of page/object delivery
field3:  Remote host
field4:  Code/Status [TCP_MISS/200 (squid action/HTTP status); the status values are the standard HTTP status codes]
field5:  Bytes delivered to the client
field6:  Method used to retrieve the page.
field7:  The destination URL
field8:  IDENT identification; tells which user is running the client program.
field9:  Hierarchy - tells what Squid did to return the page (DIRECT/64.20.165.254).
field10: Mime type
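    The epoch timestamp in field 1 can be converted to a human-readable date with GNU date, e.g. (using an illustrative epoch value):
# date -d @1256300000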
Note :-
    Squid also supports the Common Log Format (CLF), which records fewer details.

To enable Common Log Format logging in squid
#edit /etc/squid/squid.conf
-----
emulate_httpd_log    on
-----
    This makes squid log in the Common Log Format, which is useful if we use a third-party tool to parse the squid logs.

cache.log
     Stores errors and debugging information from the squid daemons, i.e. system information logs.
store.log
    This maintains the squid cache content log, i.e. details about the objects stored in the cache.
Fields:
----------
22113499023.433  RELEASE  00  FFFFFFFF  89037DHH29739DHD927AC0389  304  112399483  -1  -1  unknown  -1/0  GET http://www.yahoo.com/image.jpg
----------
Field1: Time stamp (unix epoch time: seconds since Jan 1, 1970, with millisecond precision) `date +%s`
Field2: Action done on the cache (RELEASE, CREATE, SWAPOUT (object written out to disk), SWAPIN (object read from disk into RAM))
Field3: Folder number of the cache (/var/spool/squid contains many directories that store the cache; this field refers to them)
Fields4/5: File name inside the folder denoted by field 3
Field6: HTTP status; this follows the standard HTTP status codes.
Field7: Date included in the header of the file sent to the client
Field8: The last-modified time stamp of the file served to the client
Field9: The expiration time of the content
Field10: Mime type
Field11: Size of the content (content_length/actual size)
Field12: Method used to get the destination
Field13: The exact URL that was cached.

Log Analysis Using Webalizer with the Common Log Format (CLF)

To configure Webalizer we need to make squid log in the Common Log Format
#vim squid.conf
-----
emulate_httpd_log    on
-----
#service squid restart
    This makes squid start logging in CLF to /var/log/squid/access.log

Installing the webalizer.

    The default RHEL installation includes the webalizer package; if it is missing, install it using yum.
# rpm -qa |grep -i webalizer
webalizer-ver.xx.xx

Configure Webalizer to parse the log from squid. (Webalizer can parse the squid native logs too.)

# vim /etc/webalizer.conf
------------
#change
LogFile    /var/log/httpd/access_log
#to
LogFile    /var/log/squid/access.log
HostName    mysquidserver

------------
Now run the webalizer
# webalizer -c /etc/webalizer.conf
    This will process the access.log file and write the report into the output directory defined in the webalizer.conf file. That directory contains an index.html file which can be served by a web server.
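    For instance, OutputDir in webalizer.conf can be pointed somewhere under the Apache document root (the path below is an assumption; adjust it to your layout):
# vim /etc/webalizer.conf
------------
OutputDir    /var/www/html/webalizer
------------
    The report is then reachable at http://localhost/webalizer/.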

# Now configure and start the web server to serve the HTML pages created by webalizer.

Configure the webalizer to use the Squid Native log format

# comment out the option emulate_httpd_log on in squid.conf
# restart the squid service to start logging in the squid native log format


Now configure & execute webalizer
# vim webalizer.conf
-----
LogType squid
-----
# webalizer -c /etc/webalizer.conf
    This makes webalizer parse the squid native logs and generate the .html files. Now browse the reports through the web server.

Linux Security Notes 14: Squid notes 1: Introduction

SQUID
Intro

     Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages. Squid has extensive access controls and makes a great server accelerator. It runs on most available operating systems, including Windows, and is licensed under the GNU GPL.
     Thousands of web-sites around the Internet use Squid to drastically increase their content delivery. Squid can reduce your server load and improve delivery speeds to clients. Squid can also be used to deliver content from around the world - copying only the content being used, rather than inefficiently copying everything. Finally, Squid's advanced content routing configuration allows you to build content clusters to route and load balance requests via a variety of web servers.
" [The Squid systems] are currently running at a hit-rate (the web page that served from cache) of approximately 75%, effectively quadrupling the capacity of the Apache servers behind
them. This is particularly noticeable when a large surge of traffic arrives directed to a particular page via a web link from another site, as the caching efficiency for that page will be nearly 100%. "

     The normal setup is caching the contents of an unlimited number of webservers for a limited number of clients. Another setup is “reverse proxy” or “webserver acceleration” (using http_port 80 accel vhost). In this mode, the cache serves an unlimited number of clients for a limited number of—or just one—web servers.

Initialising SQUID
  1. install the squid
  2. configure the squid
  3. start the squid service

1. install the squid
    Squid can be installed from an rpm. Squid uses a lot of system resources to serve requests, so a well-specified server will definitely improve the squid service. It uses a cache, so the more RAM, the better the performance.
For example, a squid server for 100 users might need:
a /var space of 150G
RAM - the more the better
    Even if you install squid on a lower-spec machine to serve a large community, it will still serve requests by optimising around the current hardware configuration, but a higher-spec configuration is recommended for better results.


# rpm -qpl squid.ver.rpm
This lists the changes that will be made to the system when the squid package is installed.
# rpm -ivh squid-ver.rpm
This will install the squid package.
/usr/sbin/squid is the daemon installed by the package
/usr/sbin/squidclient is a binary installed by the package, used to query the squid server to check whether content is cached on the local or a remote squid server

Note:-
     Squid has a master process that spawns a child to handle requests. The child process, not the master, binds to the default port 3128. Squid also binds to UDP port 3130, which is used to distribute load among other squid servers via peer-to-peer communication.
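    A quick way to confirm the listening ports (assuming the net-tools package is installed):
# netstat -tulpn | grep squid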

2. configure the squid
     The client makes a request to the proxy server; the proxy server contacts the destination, caches the page, and serves the content. The cache is then used to serve later requests. Access control must be configured to get this working; modify the /etc/squid/squid.conf file to configure squid.


3. start the squid service
     When installed from the rpm, squid can be started with the service command and set to start at boot with the chkconfig command, as below.
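For example, on a stock RHEL install:
# service squid start
# chkconfig squid on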

CONFIGURATION OF SQUID

Initial/simple configuration:

#vim squid.conf
#Go to the Access controls section
acl int src 192.168.1.0/255.255.255.0
http_access allow int

   Here the acl is configured for the internal network and permission is granted by the http_access directive. Squid processes the configuration file from top to bottom, so once a matching rule is found, squid stops searching the config file and starts processing the request.

Testing the squid by squidclient:

#which squidclient
/usr/sbin/squidclient
    This is a small utility that retrieves objects from the squid server cache.
#squidclient --help
Shows the options in squidclient
# squidclient -h localhost -p 3128 http://www.google.com
or
#squidclient http://www.google.com
     This returns the page via the squid server at localhost:3128. If squid is running on localhost with the default port, there is no need to specify the -h (host) or -p (port) options.

# squidclient -g 3 http://www.google.com
     This measures how fast squid fetches the page http://www.google.com, repeating 3 times (0 means infinite). We can compare the timings to check whether pages are being cached: if caching happens, only the initial query takes time and the rest take considerably less; if the page is not served from cache, all requests take about the same time. The caching directives published by the website are honoured by squid.
# squidclient -v http://www.google.com
     This dumps the full (verbose) contents of the web page to STDOUT.
#squidclient -h squidserver.domain.com -g 3 http://www.google.com
     This queries the given remote squid server for the cached URL.

Applying proxy settings for text-based HTTP clients/shell-based tools.

Steps to enable the proxy in wget/lftp/lynx
Step 1:
export http_proxy=http://proxy.domain.com:3128
     This variable is used by almost all text-based HTTP clients.

Step 2:
Now start the clients
# wget http://remotewebsite.com
     Now check the squid access log to see the request made by the client.
# lftp http://remotewebsite.com
     This allows downloading HTTP pages; lftp honours the 'http_proxy' variable.
# lynx http://remotewebsite.com
     This serves the web page through the proxy.
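    To make the proxy setting persistent for a user, the export can be appended to the shell profile (a sketch, assuming bash):
# echo 'export http_proxy=http://proxy.domain.com:3128' >> ~/.bashrc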
