If you create an FTP user on a Solaris system with the default configuration, that user can change directory to anywhere on the system and list the contents.
If you want to make FTP more secure, you can use the ftpaccess file in the /etc/ftpd/ directory.
here are the steps:
- create a group for ftp
shell > groupadd ftpuser
- create a user for ftp
shell > useradd -g ftpuser -d /path/to/ftphome -s /bin/true ftpusername
note: there must be an entry for "/bin/true" in /etc/shells; otherwise you cannot log in to the FTP server (see the example after these steps).
- edit the /etc/ftpd/ftpaccess file and add the line
guestgroup ftpuser
- then restart the ftp server
shell> svcadm restart svc:/network/ftp:default
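As a minimal sketch (using the paths from the steps above): if "/bin/true" is missing from /etc/shells you can append it, and after the restart you can test the new account. Since guestgroup chroots the session into the user's home directory, pwd inside the FTP session should report "/" rather than the full path.
shell > echo "/bin/true" >> /etc/shells
shell > ftp localhost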
Monday, March 31, 2008
Thursday, March 27, 2008
Are your tablespaces shrinkable? (oracle)
Here is a SQL script by Tom Kyte.
If your tablespaces are taking too much space, you can shrink the datafiles. But a datafile cannot be shrunk below its high-water mark, so if there is data at the end of the datafile you cannot reclaim that space.
With this SQL script you can see how far each datafile can be shrunk.
After running the script, copy the generated shrink commands and run them.
set verify off
column file_name format a50 word_wrapped
column smallest format 999,990 heading "Smallest|Size|Poss."
column currsize format 999,990 heading "Current|Size"
column savings format 999,990 heading "Poss.|Savings"
break on report
compute sum of savings on report

column value new_val blksize
select value from v$parameter where name = 'db_block_size'
/

select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
       ceil( blocks*&&blksize/1024/1024) currsize,
       ceil( blocks*&&blksize/1024/1024) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
         from dba_extents
        group by file_id ) b
where a.file_id = b.file_id(+)
/
column cmd format a75 word_wrapped
select 'alter database datafile '''||file_name||''' resize ' ||
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) || 'm;' cmd
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
         from dba_extents
        group by file_id ) b
where a.file_id = b.file_id(+)
  and ceil( blocks*&&blksize/1024/1024) -
      ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) > 0
/
http://www.oracle.com/technology/oramag/oracle/04-sep/o54asktom.html
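The script only generates the resize commands; nothing changes until you run them yourself. As an illustration (the datafile name and size below are invented, your output will differ), a generated command looks like this and is run as a privileged user in SQL*Plus:
alter database datafile '/u01/oradata/orcl/users01.dbf' resize 120m;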
How to take a tab-delimited export with MySQL
You can use mysqldump to take a dump of a table or a database.
When you use the command without extra parameters, the dump is taken as insert statements.
If you want a tab-delimited export file, you should use the --tab parameter with mysqldump.
shell > mysqldump --tab=/path_to_dump_files database table
After this command, table.sql and table.txt will be created: the .sql file holds the table structure and the .txt file holds the data.
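To load such an export back into a database, a minimal sketch (the database and path names are just examples; also note that the .txt file is written by the mysqld server itself via SELECT ... INTO OUTFILE, so the target directory must be writable by the server):
shell > mysql database < /path_to_dump_files/table.sql
shell > mysqlimport database /path_to_dump_files/table.txt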
Tuesday, March 18, 2008
Using ufsdump for directory backup
you can take incremental backups with ufsdump. here is the usage:
ufsdump 0ucf /pathto/backup/backup_1 /device
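A later incremental backup of the same device can then be taken by raising the dump level, for example (a sketch, keeping the same flags):
shell > ufsdump 1ucf /pathto/backup/backup_2 /device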
What about taking a backup of a directory? Of course you can use tar, cpio or cp to back up a directory, but using ufsdump lets you restore any corrupted data later with ufsrestore.
There is one catch: you cannot take incremental backups of directories with ufsdump; that feature only works for devices (filesystems). So when you back up a directory you always have to take a level 0 backup.
ufsdump updates the /etc/dumpdates file (with the u flag) to keep track of backup levels, but for a directory backup that record is not useful.
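As a minimal sketch (the paths are made up for the example, and the u flag is dropped since /etc/dumpdates is not useful here), backing up a directory and restoring it later with ufsrestore could look like this; ufsrestore r extracts into the current working directory:
shell > ufsdump 0f /backup/projects.dump /export/home/projects
shell > mkdir /var/tmp/restore
shell > cd /var/tmp/restore
shell > ufsrestore rf /backup/projects.dump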
Wednesday, March 5, 2008
Maximum number of files under a directory (Solaris)
- Is there a limit to the number of files under a directory on Solaris OS?
- In fact, there is no such limit. But it is not a good idea to keep thousands of files under a single directory, because operations like finding, opening, creating and deleting a file slow down as the number of files goes into the tens of thousands.
(http://www.sun.com/bigadmin/xperts/sessions/20_jes/#8)