Tunneling by Example

ssh -g -L 8888:192.168.1.135:3389 localhost

This says: on my localhost (any Linux box), listen on port 8888 and forward whatever connects there to 192.168.1.135. The -g flag additionally allows hosts other than localhost to connect to that forwarded port.

So if anyone connects to my localhost on port 8888, my localhost will forward the connection to 192.168.1.135 on port 3389.

Port 3389 is the Windows RDP port number, so the example above is an rdp connection to the windows box 192.168.1.135.
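What the -L forward does can be sketched as a tiny TCP relay: accept on a local port and shuttle bytes to a fixed destination. This is a toy illustration only (real ssh also encrypts the channel, which this relay does not), and the port numbers are whatever you pass in:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes, then half-close dst."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(listen_port, dest_host, dest_port, once=False):
    """Listen on listen_port; relay each connection to dest_host:dest_port.

    Roughly what `ssh -L listen_port:dest_host:dest_port ...` sets up,
    minus the encryption. Binding "" instead of "127.0.0.1" would mimic -g
    (accept connections from other hosts, not just localhost).
    """
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen(5)
    while True:
        client, _ = server.accept()
        remote = socket.create_connection((dest_host, dest_port))
        # one thread per direction: client->remote and remote->client
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        threading.Thread(target=pipe, args=(remote, client), daemon=True).start()
        if once:
            server.close()
            return
```

Anything that connects to the listen port, an RDP client included, is transparently relayed to the destination, which is exactly the behaviour described above.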

The connection then looks like the screenshot below, where MyWin is the localhost.

[Screenshot: rdp.PNG]


A Skeleton Key of Unknown Strength

Dan Kaminsky's Blog

TL;DR:  The glibc DNS bug (CVE-2015-7547) is unusually bad.  Even Shellshock and Heartbleed tended to affect things we knew were on the network and knew we had to defend.  This affects a universally used library (glibc) at a universally used protocol (DNS).  Generic tools that we didn’t even know had network surface (sudo) are thus exposed, as is software written in programming languages designed explicitly to be safe. Who can exploit this vulnerability? We know unambiguously that an attacker directly on our networks can take over many systems running Linux.  What we are unsure of is whether an attacker anywhere on the Internet is similarly empowered, given only the trivial capacity to cause our systems to look up addresses inside their malicious domains.

We’ve investigated the DNS lookup path, which requires the glibc exploit to survive traversing one of the millions of DNS caches dotted across the Internet. …


Reclaiming disk space from Oracle datafiles

You want to reclaim some valuable Oracle estate: disk space from your datafile(s).

ALTER DATABASE DATAFILE … RESIZE can reclaim the free space at the end of a datafile.

But if you tried it and got the error below, you chose an arbitrary value that tried to reclaim more than is possible:

ORA-03297: file contains used data beyond requested RESIZE value

There is a minimum size you cannot shrink below, called the datafile's high-water mark.
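The arithmetic behind the high-water mark is simple and worth seeing on its own. As a sketch with made-up sample numbers (not values from this system), each extent is a (starting block id, length in blocks) pair, and the file cannot shrink below the end of its highest extent:

```python
BLOCK_SIZE = 8192  # bytes per block; 8 KB is the common Oracle default

def hwm_bytes(extents):
    """High-water mark of one datafile, in bytes.

    extents: list of (block_id, blocks) pairs, i.e. where each extent
    starts and how many blocks it spans. Mirrors the SQL further down,
    including its 5 MB floor for datafiles with no extents.
    """
    if not extents:
        return 5 * 1024 * 1024
    # highest allocated block = max over extents of (start + length - 1)
    hwm_blocks = max(block_id + blocks - 1 for block_id, blocks in extents)
    return hwm_blocks * BLOCK_SIZE

# A file whose last extent ends at block 17408 has a 136 MB high-water
# mark, no matter how large the file itself currently is:
# hwm_bytes([(128, 1024), (16384, 1025)]) == 17408 * 8192 == 142606336
```

Any RESIZE target below that byte count gets ORA-03297; anything at or above it succeeds.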

So, how do you find this minimum value, the datafile's high-water mark? The obvious answer is to query the view dba_extents.

WARNING!!

On a large database, do not query dba_extents for this: it runs forever and is resource-intensive.

SO WHAT DO WE DO

We query SYS.X$KTFBUE instead, the underlying fixed table that records extent allocation in locally managed tablespaces.

Here is the query:

set linesize 1000 pagesize 0 feedback off trimspool on
with
 hwm as (
  -- get highest block id from each datafiles ( from x$ktfbue as we don't need all joins from dba_extents )
  select /*+ materialize */ ktfbuesegtsn ts#,ktfbuefno relative_fno,max(ktfbuebno+ktfbueblks-1) hwm_blocks
  from sys.x$ktfbue group by ktfbuefno,ktfbuesegtsn
 ),
 hwmts as (
  -- join ts# with tablespace_name
  select name tablespace_name,relative_fno,hwm_blocks
  from hwm join v$tablespace using(ts#)
 ),
 hwmdf as (
  -- join with datafiles, put 5M minimum for datafiles with no extents
  select file_name,nvl(hwm_blocks*(bytes/blocks),5*1024*1024) hwm_bytes,bytes,autoextensible,maxbytes
  from hwmts right join dba_data_files using(tablespace_name,relative_fno)
 )
select
 case when autoextensible='YES' and maxbytes>=bytes
 then -- we generate resize statements only if autoextensible can grow back to current size
  '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
   ||'M from '||to_char(ceil(bytes/1024/1024),999999)||'M */ '
   ||'alter database datafile '''||file_name||''' resize '||ceil(hwm_bytes/1024/1024)||'M;'
 else -- generate only a comment when autoextensible is off
  '/* reclaim '||to_char(ceil((bytes-hwm_bytes)/1024/1024),999999)
   ||'M from '||to_char(ceil(bytes/1024/1024),999999)
   ||'M after setting autoextensible maxsize higher than current size for file '
   || file_name||' */'
 end SQL
from hwmdf
where
 bytes-hwm_bytes>1024*1024 -- resize only if at least 1MB can be reclaimed
order by bytes-hwm_bytes desc
/
This is what I got on my system. It generates the resize statements with the right values:
/* reclaim    2370M from    2505M */ alter database datafile '/u02/app/oracle/oradata/CHARLES/undotbs01.dbf' resize 136M;
/* reclaim     593M from    3072M */ alter database datafile '/u02/app/oracle/oradata/CHARLES/euna_data_02.dbf' resize 2480M;
/* reclaim     248M from     256M */ alter database datafile '/u02/app/oracle/oradata/CHARLES/euna_rsa_data_01.dbf' resize 9M;
/* reclaim     243M from     256M */ alter database datafile '/u02/app/oracle/oradata/CHARLES/euna_rsa_idx_01.dbf' resize 14M;
/* reclaim     141M from     142M */ alter database datafile '/u02/app/oracle/oradata/CHARLES/users01.dbf' resize 2M;
/* reclaim      41M from     710M */ alter database datafile '/u02/app/oracle/oradata/CHARLES/sysaux01.dbf' resize 670M;
/* reclaim       3M from    1390M */ alter database datafile '/u02/app/oracle/oradata/CHARLES/system01.dbf' resize 1388M;

Running external jobs in Oracle

If, as an Oracle DBA, you want to generate a file within Oracle and FTP it to an FTP server using a scheduler job, you will probably run into an error like the one below:

ORA-27369: job of type EXECUTABLE failed with exit code: Permission denied

The solution is easy.

  • On the unix  oracle server run the following as oracle

          ls -l  $ORACLE_HOME/rdbms/admin/externaljob.ora

          The result should return something similar to this depending on your file tree layout

          -rw-r----- 1 root oinstall 1536 Apr 16 17:23 /u01/app/oracle/product/11.2-EE/rdbms/admin/externaljob.ora

  • As you can see, by default the file is read-only for oracle, so let's become root and edit it

          sudo su -

          vi  /u01/app/oracle/product/11.2-EE/rdbms/admin/externaljob.ora

           change the run_user from nobody to oracle, and the run_group from nobody to oinstall, as shown below, then save the file

           run_user = oracle
           run_group = oinstall

  • Then you are done. The job should run now.

 

Automating the removal of statspack snapshots

In case you find yourself in one of those shops still running the good old statspack!!
Every child likes new toys, but don't throw your dummy out yet: statspack is still good.
It does eat up space if you set it to run regularly, though, and you don't want to make your already miserable life difficult by manually deleting snapshots.
You can be proactive and set up a procedure to do the cleaning for you.
Try this:

##I want to delete all snapshots that are older than 14 days in a RAC cluster for a particular instance.
##Remember in RAC each instance has its own snapshots

create or replace procedure perfstat.purge_snaps as
  max_snap    number;
  min_snap    number;
  database_id number;
  inst_no     number;
  snap_number number;
  ret         number;
begin
  ret := 0;
  select instance_number into inst_no from v$instance;

  -- count this instance's snapshots older than 14 days
  select count(*) into ret
  from stats$snapshot
  where instance_number = inst_no
  and snap_time < sysdate - 14;

  if ret > 0 then
    -- find the snap_id range (and dbid) covering those old snapshots
    select min(s.snap_id), max(s.snap_id), max(di.dbid)
    into min_snap, max_snap, database_id
    from stats$snapshot s, stats$database_instance di
    where s.dbid = di.dbid
    and s.instance_number = di.instance_number
    and s.instance_number = inst_no
    and di.startup_time = s.startup_time
    and s.snap_time < sysdate - 14;

    -- purge the range; the function returns the number of snapshots deleted
    snap_number := statspack.purge(
        i_begin_snap      => min_snap,
        i_end_snap        => max_snap,
        i_snap_range      => true,
        i_extended_purge  => false,
        i_dbid            => database_id,
        i_instance_number => inst_no);

    dbms_output.put_line('The number of snapshots deleted are '||to_char(snap_number));
  end if;
end;
/

Who has got the locks?

**Those times as a DBA when you want to track down who or what is causing high enqueue (enq) lock waits?
Sweet mum, try this. Note that if you run this on a RAC database, the information returned is relevant only to the current instance.

SELECT vh.sid locking_sid,
       vs.status status,
       vs.program program_holding,
       vw.sid waiter_sid,
       vsw.program program_waiting
FROM v$lock vh, v$lock vw, v$session vs, v$session vsw
WHERE (vh.id1, vh.id2) IN
      (SELECT id1, id2 FROM v$lock WHERE request = 0
       INTERSECT
       SELECT id1, id2 FROM v$lock WHERE lmode = 0)
AND vh.id1 = vw.id1
AND vh.id2 = vw.id2
AND vh.request = 0
AND vw.lmode = 0
AND vh.sid = vs.sid
AND vw.sid = vsw.sid;

Who is trashing my temp segments?

–When an Oracle db is running out of temporary tablespace and you need to find out which SQL is using the temp tablespace.

**First, check the temp space usage:

select y.Total_MB,
       y.Total_MB - round(x.used_blocks*8/1024) Current_Free_MB,  -- assumes 8 KB blocks
       round(used_blocks*8/1024) Current_Used_MB,
       round(max_used_blocks*8/1024) Max_used_MB
from v$sort_segment x,
     (select round(sum(bytes)/1024/1024) Total_MB from dba_temp_files) y;
** Then, let's try and fish out who is using the temp segments

col hash_value for a40
col tablespace for a10
col username for a15
set linesize 132 pagesize 1000

SELECT s.sid, s.username, u.tablespace,
       s.sql_hash_value||'/'||u.sqlhash hash_value,
       u.segtype, u.contents, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr = u.session_addr
ORDER BY u.blocks desc;

–As you may see, segment types differ. Most of the time, SORT is the one we need to check.
–If you are lucky, or the client only has one SQL cursor open, you will get the SQL hash value in the above result.
–NB: the value of x below is the sid of the session with the highest sorted blocks from the previous query, and y is the parsing schema's name

–However, the tempspace can be used by any open cursor in that session. The current SQL is not necessarily the culprit. In that case, we can check it from v$sql using the query below:

col hash_value for 999999999999
select hash_value, sorts, rows_processed/executions
from v$sql
where hash_value in (select hash_value from v$open_cursor where sid = x)
and sorts > 0
and PARSING_SCHEMA_NAME = 'y'
order by rows_processed/executions desc;