Archive for August, 2012

Author: AngryFox    Category: Uncategorized    August 5th, 2012    No comments

1. To optimize a query, avoid full table scans as much as possible; first consider building indexes on the columns used in the where and order by clauses.
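For example (table, column and index names are illustrative):
create index idx_t_num on t(num)
select id from t where num=10 order by num    -- can now seek on the index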

2. Avoid null checks on columns in the where clause, otherwise the engine abandons the index and scans the whole table,
e.g.:
select id from t where num is null
You can set a default of 0 on num, make sure no num in the table is null, and query like this instead:
select id from t where num=0
3. Avoid the != or <> operators in the where clause, otherwise the engine abandons the index and scans the whole table.
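One possible rewrite (a sketch; whether it pays off depends on the data distribution) splits the inequality into two ranges, e.g. instead of select id from t where num<>10:
select id from t where num<10
union all
select id from t where num>10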

4. Avoid joining conditions with or in the where clause, otherwise the engine abandons the index and scans the whole table, e.g.:
select id from t where num=10 or num=20
Query like this instead:
select id from t where num=10
union all
select id from t where num=20
5. Use in and not in with caution, as they can also cause full table scans, e.g.:
select id from t where num in(1,2,3)
For consecutive values, use between rather than in:
select id from t where num between 1 and 3
6. The following query also causes a full table scan:
select id from t where name like '%abc%'
For better performance, consider full-text search.
7. Using a parameter in the where clause also causes a full table scan, because SQL resolves local variables only at run time, while the optimizer cannot defer the choice of access plan to run time: it must choose at compile time, when the variable's value is still unknown and therefore cannot inform index selection. The following statement scans the whole table:
select id from t where num=@num
You can force the query to use an index instead:
select id from t with(index(index_name)) where num=@num

8. Avoid expressions on columns in the where clause; they make the engine abandon the index and scan the whole table. E.g.:
select id from t where num/2=100
should be rewritten as:
select id from t where num=100*2

9. Avoid applying functions to columns in the where clause; they likewise make the engine abandon the index and scan the whole table. E.g.:
select id from t where substring(name,1,3)='abc'    -- ids whose name starts with 'abc'
select id from t where datediff(day,createdate,'2005-11-30')=0    -- ids created on '2005-11-30'
should be rewritten as:
select id from t where name like 'abc%'
select id from t where createdate>='2005-11-30' and createdate<'2005-12-1'

10. Do not apply functions, arithmetic, or other expressions to the left side of "=" in the where clause, or the system may be unable to use the index correctly.

11. When filtering on a column of a composite index, the condition must use the first column of that index for the system to be able to use the index; otherwise the index will not be used. Also keep the column order in the condition consistent with the index order wherever possible.
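For example (illustrative names):
create index idx_t_num_name on t(num, name)
select id from t where num=10 and name='abc'    -- can use the composite index
select id from t where name='abc'               -- leading column missing; index unusable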

12. Do not write meaningless queries. E.g., to generate an empty table structure:

select col1,col2 into #t from t where 1=0
This returns no result set but still consumes system resources; write this instead:
create table #t(…)

13. Replacing in with exists is often a good choice:
select num from a where num in(select num from b)
can be replaced with:
select num from a where exists(select 1 from b where num=a.num)

14. Not every index helps every query. SQL optimizes queries based on the data in the table, so when an indexed column contains many duplicate values the query may not use the index at all. E.g., if a table's sex column is roughly half male and half female, an index on sex does nothing for query performance.

15. More indexes are not always better. An index speeds up the corresponding selects but slows down inserts and updates, which may have to rebuild it, so how to index deserves careful, case-by-case thought. A table should preferably have no more than 6 indexes; beyond that, reconsider whether indexes on rarely used columns are really necessary.

16. Avoid updating clustered index columns, because the clustered index order is the physical storage order of the rows; changing such a value reorders the whole table at considerable cost. If the application must update a clustered index column frequently, reconsider whether that index should be clustered at all.

17. Prefer numeric columns. Do not make a column character-typed when it holds only numeric data; that hurts query and join performance and increases storage overhead, because the engine compares strings character by character while numbers need only a single comparison.

18. Use varchar/nvarchar instead of char/nchar wherever possible: variable-length columns take less storage, and searching within a smaller column is clearly more efficient.

19. Never write select * from t anywhere; replace "*" with an explicit column list and do not return any column you will not use.

20. Prefer table variables over temporary tables. If a table variable will hold a large amount of data, note that its indexing is very limited (primary key only).
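For example (a T-SQL sketch with illustrative names):
declare @t table (id int primary key)
insert into @t select id from t where num=10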

21. Avoid creating and dropping temporary tables frequently, to reduce the load on system tables.

22. Temporary tables are not off limits; used appropriately they make certain routines more effective, for example when a data set from a large or frequently used table must be referenced repeatedly. For one-off operations, however, an exported table is better.

23. When creating a temporary table, if a large amount of data is inserted at once, use select into instead of create table to avoid heavy logging and gain speed; if the data volume is small, create table first and then insert, to ease the pressure on system tables. (See the sketch below.)
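For example (illustrative names):
select id, num into #big from t where num>100     -- large one-shot load
create table #small (id int, num int)             -- small load: create, then insert
insert into #small select id, num from t where num<=100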

24. If a stored procedure uses temporary tables, explicitly drop them all at the end: truncate table first, then drop table. This avoids long locks on system tables.

25. Avoid cursors where possible; they are inefficient, and any cursor that works through more than 10,000 rows is a candidate for rewriting.

26. Before reaching for a cursor-based or temporary-table method, look for a set-based solution to the problem; set-based methods are usually more efficient.

27. Like temporary tables, cursors are not off limits. Using a FAST_FORWARD cursor on a small data set is often better than other row-by-row techniques, especially when several tables must be referenced to obtain the data. Routines that include "totals" in the result set are usually faster than their cursor-based equivalents. If development time allows, try both the cursor-based and the set-based approach and keep whichever works better.

28. Set SET NOCOUNT ON at the start of every stored procedure and trigger and SET NOCOUNT OFF at the end. There is no need to send the client a DONE_IN_PROC message after every statement, as sketched below.
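A minimal sketch (procedure and table names are illustrative):
create procedure usp_get_ids as
begin
    set nocount on
    select id from t where num=10
    set nocount off
end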

29. Avoid large transactions, to improve system concurrency.

30. Avoid returning large volumes of data to the client; if the volume is excessive, reconsider whether the requirement itself is reasonable.

Common MySQL date-range queries:

Today
select * from your_table where to_days(date_col) = to_days(now());

Yesterday
SELECT * FROM your_table WHERE TO_DAYS(NOW()) - TO_DAYS(date_col) = 1

Last 7 days
SELECT * FROM your_table WHERE DATE_SUB(CURDATE(), INTERVAL 7 DAY) <= date(date_col)

Last 30 days
SELECT * FROM your_table WHERE DATE_SUB(CURDATE(), INTERVAL 30 DAY) <= date(date_col)

This month
SELECT * FROM your_table WHERE DATE_FORMAT(date_col, '%Y%m') = DATE_FORMAT(CURDATE(), '%Y%m')

Last month
SELECT * FROM your_table WHERE PERIOD_DIFF(date_format(now(), '%Y%m'), date_format(date_col, '%Y%m')) = 1

Author: AngryFox    Category: Uncategorized    August 4th, 2012    No comments

I. Preparation
Run yum to check for the MongoDB package:

[root@vm ~]# yum info mongo-10gen
(no matching packages found)

This means your CentOS system's yum sources do not include MongoDB, so before installing it with yum you need to add a source, i.e. put a *.repo file in /etc/yum.repos.d/. Below are the MongoDB yum source configurations for 64-bit and 32-bit CentOS systems.

We will name the file /etc/yum.repos.d/10gen.repo

For 64-bit systems:

vi /etc/yum.repos.d/10gen.repo

[10gen]
name=10gen Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0

For 32-bit systems:

vi /etc/yum.repos.d/10gen.repo

[10gen]
name=10gen Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/i686
gpgcheck=0

Choose the configuration that matches your system.

To check whether the system is 32-bit or 64-bit:

$ uname -a

If the output contains x86_64 the system is 64-bit; for example, my CentOS 6.0 64-bit box prints:

Linux vm.centos6 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux

With the yum source configured correctly, the following commands show the MongoDB package information.

Check the server package:

[root@vm ~]# yum info mongo-10gen-server
****(several unimportant lines omitted)*********
Available Packages
Name : mongo-10gen-server
Arch : x86_64
Version : 1.8.2
Release : mongodb_1
Size : 4.7 M
Repo : 10gen
Summary : mongo server, sharding server, and support scripts
URL : http://www.mongodb.org
License : AGPL 3.0
Description: Mongo (from "huMONGOus") is a schema-free document-oriented
: database.
:
: This package provides the mongo server software, mongo sharding
: server softwware, default configuration files, and init.d scripts.

[root@vm ~]#

Check the client tools package:

[root@vm ~]# yum info mongo-10gen
Loaded plugins: fastestmirror
**(several unimportant lines omitted)**
Installed Packages
Name : mongo-10gen
Arch : x86_64
Version : 1.8.2
Release : mongodb_1
Size : 55 M
Repo : 10gen
Summary : mongo client shell and tools
URL : http://www.mongodb.org
License : AGPL 3.0
Description: Mongo (from "huMONGOus") is a schema-free document-oriented
: database. It features dynamic profileable queries, full indexing,
: replication and fail-over support, efficient storage of large
: binary data objects, and auto-sharding.
:
: This package provides the mongo shell, import/export tools, and
: other client utilities.

[root@vm ~]#

II. Installing the MongoDB server and client tools

1. Install the server:

[root@vm ~]# yum install mongo-10gen-server
[root@vm ~]# ls /usr/bin/mongo(press Tab)
mongo mongod mongodump mongoexport mongofiles mongoimport mongorestore mongos mongostat

-----------------------------------------------
These are the MongoDB program files.

Because the mongo-10gen-server package depends on mongo-10gen, installing the server also pulls in the client tools; there is no need to install mongo-10gen separately.

2. To install only the client tools:

[root@vm ~]# yum install mongo-10gen
3. Check:

[root@vm ~]# /etc/init.d/mongod
Usage: /etc/init.d/mongod {start|stop|status|restart|reload|force-reload|condrestart}
[root@vm ~]# /etc/init.d/mongod status
mongod (pid 1341) is running...
[root@vm ~]#

This shows the server is already running right after installation.
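If it ever is not, the same init script starts it, and chkconfig enables it at boot (a sketch using standard CentOS service management):

[root@vm ~]# /etc/init.d/mongod start
[root@vm ~]# chkconfig mongod on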

4. Server configuration: /etc/mongod.conf

[root@vm ~]# cat /etc/mongod.conf
# mongo.conf

#where to log
logpath=/var/log/mongo/mongod.log

logappend=true # append to the log instead of overwriting it

# fork and run in background
fork = true

#port = 27017 # listening port

dbpath=/var/lib/mongo # where the database files are stored

# Enables periodic logging of CPU utilization and I/O wait
#cpu = true

# Turn on/off security.  Off (no authentication) is currently the default
#noauth = true
#auth = true

# Verbose logging output.
#verbose = true

# Inspect all client data for validity on receipt (useful for
# developing drivers)
#objcheck = true

# Enable db quota management; by default each db may have 8 files,
# adjustable with the quotaFiles parameter
#quota = true

# Set oplogging level where n is
#   0=off (default)
#   1=W
#   2=R
#   3=both
#   7=W+some reads
#oplog = 0

# Diagnostic/debugging option
#nocursors = true

# Ignore query hints
#nohints = true

# Disable the HTTP interface (defaults to localhost:28017; the stock
# comment's 27018 is wrong)
#nohttpinterface = true

# Turns off server-side scripting.  This will result in greatly limited
# functionality
#noscripting = true

# Turns off table scans.  Any query that would do a table scan fails.
#notablescan = true

# Disable data file preallocation.
#noprealloc = true

# Specify .ns file size for new databases, in MB.
# nssize = <size>

# Account token for Mongo monitoring server.
#mms-token = <token>

# Server name for Mongo monitoring server.
#mms-name = <server-name>

# Ping interval for Mongo monitoring server.
#mms-interval = <seconds>

# Replication Options

# in replicated mongo databases, specify here whether this is a slave or master
#slave = true
#source = master.example.com
# Slave only: specify a single database to replicate
#only = master.example.com
# or
#master = true
#source = slave.example.com
[root@vm ~]#

Those are some of the parameters in the default configuration file; the full list can be shown with mongod -h:

[root@vm ~]# mongod -h
Allowed options:

General options:
  -h [ --help ]          show this usage information
  --version              show version information
  -f [ --config ] arg    configuration file specifying additional options
  -v [ --verbose ]       be more verbose (include multiple times for more
                         verbosity e.g. -vvvvv)
  --quiet                quieter output
  --port arg             specify port number
  --bind_ip arg          comma separated list of ip addresses to listen on -
                         all local ips by default
  --maxConns arg         max number of simultaneous connections
  --logpath arg          log file to send write to instead of stdout - has to
                         be a file, not directory
  --logappend            append to logpath instead of over-writing
  --pidfilepath arg      full path to pidfile (if not set, no pidfile is
                         created)
  --keyFile arg          private key for cluster authentication (only for
                         replica sets)
  --unixSocketPrefix arg alternative directory for UNIX domain sockets
                         (defaults to /tmp)
  --fork                 fork server process
  --auth                 run with security
  --cpu                  periodically show cpu and iowait utilization
  --dbpath arg           directory for datafiles
  --diaglog arg          0=off 1=W 2=R 3=both 7=W+some reads
  --directoryperdb       each database will be stored in a separate directory
  --journal              enable journaling (recommended: lets you recover
                         recent data after a crash)
  --journalOptions arg   journal diagnostic options
  --ipv6                 enable IPv6 support (disabled by default)
  --jsonp                allow JSONP access via http (has security
                         implications)
  --noauth               run without security
  --nohttpinterface      disable http interface
  --noprealloc           disable data file preallocation - will often hurt
                         performance
  --noscripting          disable scripting engine
  --notablescan          do not allow table scans
  --nounixsocket         disable listening on unix sockets
  --nssize arg (=16)     .ns file size (in MB) for new databases
  --objcheck             inspect client data for validity on receipt
  --profile arg          0=off 1=slow, 2=all
  --quota                limits each database to a certain number of files (8
                         default), adjustable with --quotaFiles
  --quotaFiles arg       number of files allower per db, requires --quota
  --rest                 turn on simple rest api
  --repair               run repair on all dbs
  --repairpath arg       root directory for repair files - defaults to dbpath
  --slowms arg (=100)    value of slow for profile and console log
  --smallfiles           use a smaller default file size
  --syncdelay arg (=60)  seconds between disk syncs (0=never, but not
                         recommended)
  --sysinfo              print some diagnostic system information
  --upgrade              upgrade db if needed (required when upgrading from
                         <=1.0 to 1.1+)

Replication options:
  --fastsync            indicate that this instance is starting from a dbpath
                        snapshot of the repl peer
  --autoresync          automatically resync if slave data is stale
  --oplogSize arg       size limit (in MB) for op log

Master/slave options:
  --master              master mode
  --slave               slave mode
  --source arg          when slave: specify master as <server:port>
  --only arg            when slave: specify a single database to replicate
  --slavedelay arg      specify delay (in seconds) to be used when applying
                        master ops to slave

Replica set options:
  --replSet arg         arg is <setname>[/<optionalseedhostlist>]

Sharding options:
  --configsvr           declare this is a config db of a cluster; default port
                        27019; default dir /data/configdb
  --shardsvr            declare this is a shard db of a cluster; default port
                        27018
  --noMoveParanoia      turn off paranoid saving of data for moveChunk.  this
                        is on by default for now, but default will switch

[root@vm ~]#
Author: AngryFox    Category: Uncategorized    August 4th, 2012    No comments
<?php
/**
* Mongodb wrapper class
*
* Examples:
* $mongo = new HMongodb("127.0.0.1:11223");
* $mongo->selectDb("test_db");
* Create an index:
* $mongo->ensureIndex("test_table", array("id"=>1), array('unique'=>true));
* Count the records in a collection:
* $mongo->count("test_table");
* Insert a record:
* $mongo->insert("test_table", array("id"=>2, "title"=>"asdqw"));
* Update a record:
* $mongo->update("test_table", array("id"=>1), array("id"=>1,"title"=>"bbb"));
* Update a record - update if it exists, insert otherwise (upsert):
* $mongo->update("test_table", array("id"=>1), array("id"=>1,"title"=>"bbb"), array("upsert"=>1));
* Find records:
* $mongo->find("c", array("title"=>"asdqw"), array("start"=>2,"limit"=>2,"sort"=>array("id"=>1)));
* Find a single record:
* $mongo->findOne("ttt", array("id"=>1));
* Remove records:
* $mongo->remove("ttt", array("title"=>"bbb"));
* Remove a single record only:
* $mongo->remove("ttt", array("title"=>"bbb"), array("justOne"=>1));
* Get the last Mongo error message:
* $mongo->getError();
*/

class HMongodb {  

    // Mongodb connection
    var $mongo;  

    var $curr_db_name;
    var $curr_table_name;
    var $error;  

    /**
    * Constructor
    * Supports passing several mongo servers (1. fail over to another server when one
    * has problems; 2. spread queries evenly across the servers)
    *
    * Parameters:
    * $mongo_server: array or string - array("127.0.0.1:1111", "127.0.0.1:2222") - "127.0.0.1:1111"
    * $connect: whether to connect when the mongo object is created; default is to connect
    * $auto_balance: whether to load-balance automatically; default yes
    *
    * Returns:
    * on success: mongo object
    * on failure: false
    */
    function __construct($mongo_server, $connect=true, $auto_balance=true)
    {
        if (is_array($mongo_server))
        {
            $mongo_server_num = count($mongo_server);
            if ($mongo_server_num > 1 && $auto_balance)
            {
                $prior_server_num = rand(1, $mongo_server_num);
                $rand_keys = array_rand($mongo_server, $mongo_server_num);
                $mongo_server_str = $mongo_server[$prior_server_num-1];
                foreach ($rand_keys as $key)
                {
                    if ($key != $prior_server_num - 1)
                    {
                        $mongo_server_str .= ',' . $mongo_server[$key];
                    }
                }
            }
            else
            {
                $mongo_server_str = implode(',', $mongo_server);
            }
        }
        else
        {
            $mongo_server_str = $mongo_server;
        }
        try {
            // connect with the server list built above
            $this->mongo = new Mongo($mongo_server_str, array('connect'=>$connect));
        }
        catch (MongoConnectionException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
    }  

    function getInstance($mongo_server, $flag=array())
    {
        static $mongodb_arr;
        if (empty($flag['tag']))
        {
            $flag['tag'] = 'default';
        }
        if (isset($flag['force']) && $flag['force'] == true)
        {
            $mongo = new HMongodb($mongo_server);
            if (empty($mongodb_arr[$flag['tag']]))
            {
                $mongodb_arr[$flag['tag']] = $mongo;
            }
            return $mongo;
        }
        else if (isset($mongodb_arr[$flag['tag']]) && is_object($mongodb_arr[$flag['tag']]))
        {
            // cached connections are objects, not resources
            return $mongodb_arr[$flag['tag']];
        }
        else
        {
            $mongo = new HMongodb($mongo_server);
            $mongodb_arr[$flag['tag']] = $mongo;
            return $mongo;
        }
    }  

    /**
    * Connect to the mongodb server
    *
    * Parameters: none
    *
    * Returns:
    * on success: true
    * on failure: false
    */
    function connect()
    {
        try {
            $this->mongo->connect();
            return true;
        }
        catch (MongoConnectionException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
    }  

    /**
    * Select a db
    *
    * Parameter: $dbname
    *
    * Returns: nothing
    */
    function selectDb($dbname)
    {
        $this->curr_db_name = $dbname;
    }  

    /**
    * Create an index; returns immediately if the index already exists.
    *
    * Parameters:
    * $table_name: collection name
    * $index: index spec - array("id"=>1) - ascending index on the id field
    * $index_param: extra options - unique index etc.
    *
    * Returns:
    * on success: true
    * on failure: false
    */
    function ensureIndex($table_name, $index, $index_param=array())
    {
        $dbname = $this->curr_db_name;
        $index_param['safe'] = 1;
        try {
            $this->mongo->$dbname->$table_name->ensureIndex($index, $index_param);
            return true;
        }
        catch (MongoCursorException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
    }  

    /**
    * Insert a record
    *
    * Parameters:
    * $table_name: collection name
    * $record: the record
    *
    * Returns:
    * on success: true
    * on failure: false
    */
    function insert($table_name, $record)
    {
        $dbname = $this->curr_db_name;
        try {
            $this->mongo->$dbname->$table_name->insert($record, array('safe'=>true));
            return true;
        }
        catch (MongoCursorException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
    }  

    /**
    * Count the records in a collection
    *
    * Parameter:
    * $table_name: collection name
    *
    * Returns: the number of records
    */
    function count($table_name)
    {
        $dbname = $this->curr_db_name;
        return $this->mongo->$dbname->$table_name->count();
    }  

    /**
    * Update records
    *
    * Parameters:
    * $table_name: collection name
    * $condition: update criteria
    * $newdata: the new record data
    * $options: update options - upsert/multiple
    *
    * Returns:
    * on success: true
    * on failure: false
    */
    function update($table_name, $condition, $newdata, $options=array())
    {
        $dbname = $this->curr_db_name;
        $options['safe'] = 1;
        if (!isset($options['multiple']))
        {
            $options['multiple'] = 0;
        }
        try {
            $this->mongo->$dbname->$table_name->update($condition, $newdata, $options);
            return true;
        }
        catch (MongoCursorException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
    }  

    /**
    * Remove records
    *
    * Parameters:
    * $table_name: collection name
    * $condition: removal criteria
    * $options: removal options - justOne
    *
    * Returns:
    * on success: true
    * on failure: false
    */
    function remove($table_name, $condition, $options=array())
    {
        $dbname = $this->curr_db_name;
        $options['safe'] = 1;
        try {
            $this->mongo->$dbname->$table_name->remove($condition, $options);
            return true;
        }
        catch (MongoCursorException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
    }  

    /**
    * Find records
    *
    * Parameters:
    * $table_name: collection name
    * $query_condition: field query criteria
    * $result_condition: result constraints - limit/sort etc.
    * $fields: fields to fetch
    *
    * Returns:
    * on success: the result set
    * on failure: false
    */
    function find($table_name, $query_condition, $result_condition=array(), $fields=array())
    {
        $dbname = $this->curr_db_name;
        $cursor = $this->mongo->$dbname->$table_name->find($query_condition, $fields);
        if (!empty($result_condition['start']))
        {
            $cursor->skip($result_condition['start']);
        }
        if (!empty($result_condition['limit']))
        {
            $cursor->limit($result_condition['limit']);
        }
        if (!empty($result_condition['sort']))
        {
            $cursor->sort($result_condition['sort']);
        }
        $result = array();
        try {
            while ($cursor->hasNext())
            {
                $result[] = $cursor->getNext();
            }
        }
        catch (MongoConnectionException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
        catch (MongoCursorTimeoutException $e)
        {
            $this->error = $e->getMessage();
            return false;
        }
        return $result;
    }  

    /**
    * Find a single record
    *
    * Parameters:
    * $table_name: collection name
    * $condition: query criteria
    * $fields: fields to fetch
    *
    * Returns:
    * on success: one record
    * on failure: false
    */
    function findOne($table_name, $condition, $fields=array())
    {
        $dbname = $this->curr_db_name;
        return $this->mongo->$dbname->$table_name->findOne($condition, $fields);
    }  

    /**
    * Get the current error message
    *
    * Parameters: none
    *
    * Returns: the current error message
    */
    function getError()
    {
        return $this->error;
    }
}  

?>
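A quick usage sketch for the class above (host, database and collection names are placeholders; the class is PHP4-style, so getInstance() is called statically), showing the getInstance() helper, which caches one connection per tag:

$mongo = HMongodb::getInstance("127.0.0.1:27017", array('tag'=>'main'));
$mongo->selectDb("test_db");
if (!$mongo->insert("test_table", array("id"=>1, "title"=>"hello")))
{
    echo $mongo->getError();
}
var_dump($mongo->findOne("test_table", array("id"=>1)));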
Author: AngryFox    Category: Uncategorized    August 4th, 2012    No comments

phpredis is a PHP extension for Redis. It is quite efficient and offers list and sorted-set operations, which is very useful for building memory-level relations between business modules. Below are usage notes for the commands redis provides.
Download:

https://github.com/owlient/phpredis (supports redis 2.0.4)
Redis::__construct - constructor
$redis = new Redis();

connect, open - connect to the redis server
Parameters:
host: string, server address
port: int, port number
timeout: float, connection timeout (optional, default 0 = unlimited)
Note: redis.conf also has a timeout setting, default 300

pconnect, popen - persistent connection that is not actively closed
Same parameters as above

setOption - set a client option

getOption - read a client option
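For example, the serializer option (Redis::OPT_SERIALIZER and Redis::SERIALIZER_PHP are constants provided by phpredis):
$redis->setOption(Redis::OPT_SERIALIZER, Redis::SERIALIZER_PHP); // serialize values automatically
$serializer = $redis->getOption(Redis::OPT_SERIALIZER);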

ping - check the connection status

get - get the value of a key (string value)
Returns false if the key does not exist

set - write a key and value (string value)
Returns true on success

setex - write a value with a time to live
$redis->setex('key', 3600, 'value'); // sets key → value, with 1h TTL.

setnx - write a value only if the key does not already exist
$redis->setnx('key', 'value');
$redis->setnx('key', 'value');

delete - delete the given key(s)
Returns the number of keys deleted (long)
$redis->delete('key1', 'key2');
$redis->delete(array('key3', 'key4', 'key5'));

ttl
Get the remaining time to live of a key

persist
Remove the time to live from a key
Returns true if a timeout was removed, false otherwise

mset (redis version 1.1 and above only)
Set several keys at once
$redis->mset(array('key0' => 'value0', 'key1' => 'value1'));

multi, exec, discard
Enter or leave transaction mode
The optional parameter is Redis::MULTI or Redis::PIPELINE; the default is Redis::MULTI
Redis::MULTI: run the queued operations as one transaction
Redis::PIPELINE: simply send the (multiple) commands to the server faster, with no atomicity guarantee
discard: abandon a transaction
Return value:
multi() returns the redis object and enters multi-mode; once in multi-mode, every later method call returns the same object, until exec() is called.

watch, unwatch (in my tests this did not behave as documented)
Watch a key for modification by other clients. If the key is modified between the watch and the exec call, the MULTI/EXEC transaction fails (returns false)
unwatch cancels all keys watched by this client
Parameter: a list of keys
$redis->watch('x');

$ret = $redis->multi()->incr('x')->exec();

subscribe *
Method callback. Note: this method may change in the future

publish *
Publish a message to a channel. Note: this method may change in the future

exists
Check whether a key exists: true if it does, false if not

incr, incrBy
Increment a key's value by 1, or by the second parameter if one is given
$redis->incr('key1');
$redis->incrBy('key1', 10);

decr, decrBy
Decrement; used the same way as incr

getMultiple
Parameter:
an array of keys
Returns:
the value for each key that exists, false for each key that does not
$redis->set('key1', 'value1'); $redis->set('key2', 'value2'); $redis->set('key3', 'value3'); $redis->getMultiple(array('key1', 'key2', 'key3'));

List operations
lPush
$redis->lPush(key, value);
Prepend an element with value value to the list at key (left/head)

rPush
$redis->rPush(key, value);
Append an element with value value to the list at key (right/tail)

lPushx/rPushx
$redis->lPushx(key, value);
Prepend/append value to the list at key, but only if the list already exists

lPop/rPop
$redis->lPop('key');
Return and remove the first element of the list at key, counting from the left (head) / right (tail)

blPop/brPop
$redis->blPop('key1', 'key2', 10);
Blocking version of lpop: if all the given lists are empty or missing, the command blocks for up to timeout seconds waiting for an element to arrive on any of them; a timeout of 0 blocks indefinitely

lSize
$redis->lSize('key');
Return the number of elements in the list at key

lIndex, lGet
$redis->lGet('key', 0);
Return the element at position index of the list at key

lSet
$redis->lSet('key', 0, 'X');
Set the element at position index of the list at key to value

lRange, lGetRange
$redis->lRange('key1', 0, -1);
Return the elements between start and end of the list at key (end = -1 returns everything)

lTrim, listTrim
$redis->lTrim('key', start, end);
Trim the list at key, keeping only the elements between start and end

lRem, lRemove
$redis->lRem('key', 'A', 2);
Remove count occurrences of value from the list at key. With count = 0, remove all occurrences; count > 0 removes count occurrences from head to tail; count < 0 removes |count| occurrences from tail to head

lInsert
In the list at key, find the element whose value is pivot and, depending on Redis::BEFORE | Redis::AFTER, insert newvalue before or after it. If the key does not exist nothing is inserted; if pivot is not found, returns -1
$redis->delete('key1'); $redis->lInsert('key1', Redis::AFTER, 'A', 'X'); $redis->lPush('key1', 'A'); $redis->lPush('key1', 'B'); $redis->lPush('key1', 'C'); $redis->lInsert('key1', Redis::BEFORE, 'C', 'X');
$redis->lRange('key1', 0, -1);
$redis->lInsert('key1', Redis::AFTER, 'C', 'Y');
$redis->lRange('key1', 0, -1);
$redis->lInsert('key1', Redis::AFTER, 'W', 'value');

rpoplpush
Return and remove the tail element of the list at srckey, and prepend it to the list at dstkey
$redis->delete('x', 'y');
$redis->lPush('x', 'abc'); $redis->lPush('x', 'def'); $redis->lPush('y', '123'); $redis->lPush('y', '456'); // move the last of x to the front of y. var_dump($redis->rpoplpush('x', 'y'));
var_dump($redis->lRange('x', 0, -1));
var_dump($redis->lRange('y', 0, -1));

string(3) "abc"
array(1) { [0]=> string(3) "def" }
array(3) { [0]=> string(3) "abc" [1]=> string(3) "456" [2]=> string(3) "123" }

SET operations
sAdd
Add value to the set at key; if value is already a member, nothing is written and false is returned
$redis->sAdd(key, value);

sRem, sRemove
Remove value from the set at key
$redis->sAdd('key1', 'set1');
$redis->sAdd('key1', 'set2');
$redis->sAdd('key1', 'set3');
$redis->sRem('key1', 'set2');

sMove
Move value from the set at srckey to the set at dstkey
$redis->sMove(srckey, dstkey, value);

sIsMember, sContains
Check whether value is a member of the set at key: true if present, false if not
$redis->sIsMember(key, value);

sCard, sSize
Return the number of elements in the set at key

sPop
Return and remove one random element of the set at key

sRandMember
Return one random element of the set at key, without removing it

sInter
Intersection of sets

sInterStore
Intersection, stored into the set named output
$redis->sInterStore('output', 'key1', 'key2', 'key3')

sUnion
Union of sets
$redis->sUnion('s0', 's1', 's2');
the union of s0, s1 and s2

sUnionStore
Union, stored into the set named output
$redis->sUnionStore('output', 'key1', 'key2', 'key3');

sDiff
Difference of sets

sDiffStore
Difference, stored into the set named output

sMembers, sGetMembers
Return all elements of the set at key

sort
Sorting, paging, etc.
Options:
'by' => 'some_pattern_*',
'limit' => array(0, 1),
'get' => 'some_other_pattern_*' or an array of patterns,
'sort' => 'asc' or 'desc',
'alpha' => TRUE,
'store' => 'external-key'
Example:
$redis->delete('s'); $redis->sadd('s', 5); $redis->sadd('s', 4); $redis->sadd('s', 2); $redis->sadd('s', 1); $redis->sadd('s', 3);
var_dump($redis->sort('s')); // 1,2,3,4,5
var_dump($redis->sort('s', array('sort' => 'desc'))); // 5,4,3,2,1
var_dump($redis->sort('s', array('sort' => 'desc', 'store' => 'out'))); // (int)5

String commands
getSet
Return the old value of key and write value into it
$redis->set('x', '42');
$exValue = $redis->getSet('x', 'lol'); // return '42', replaces x by 'lol'
$newValue = $redis->get('x'); // return 'lol'

append
Append value to the string stored at key
$redis->set('key', 'value1');
$redis->append('key', 'value2');
$redis->get('key');

getRange (not available in this phpredis version)
Return the characters between start and end of the string at key
$redis->set('key', 'string value');
$redis->getRange('key', 0, 5);
$redis->getRange('key', -5, -1);

setRange (not available in this phpredis version)
Overwrite the string at key from offset start with value
$redis->set('key', 'Hello world');
$redis->setRange('key', 6, "redis");
$redis->get('key');

strlen
Get the length of the string at key
$redis->strlen('key');

getBit/setBit
Read/write bit-level information

zset (sorted set) operations
zAdd(key, score, member): add member to the zset at key, with score used for ordering. If the member already exists, its position is updated according to the new score.
$redis->zAdd('key', 1, 'val1');
$redis->zAdd('key', 0, 'val0');
$redis->zAdd('key', 5, 'val5');
$redis->zRange('key', 0, -1); // array(val0, val1, val5)

zRange(key, start, end, withscores): return the elements with index start through end of the zset at key (ordered by score, ascending)
$redis->zAdd('key1', 0, 'val0');
$redis->zAdd('key1', 2, 'val2');
$redis->zAdd('key1', 10, 'val10');
$redis->zRange('key1', 0, -1); // with scores $redis->zRange('key1', 0, -1, true);

zDelete, zRem
zRem(key, member): remove member from the zset at key
$redis->zAdd('key', 0, 'val0');
$redis->zAdd('key', 2, 'val2');
$redis->zAdd('key', 10, 'val10');
$redis->zDelete('key', 'val2');
$redis->zRange('key', 0, -1);

zRevRange(key, start, end, withscores): return the elements with index start through end of the zset at key, ordered by score descending. withscores: whether to return the scores as well, default false
$redis->zAdd('key', 0, 'val0');
$redis->zAdd('key', 2, 'val2');
$redis->zAdd('key', 10, 'val10');
$redis->zRevRange('key', 0, -1); // with scores $redis->zRevRange('key', 0, -1, true);

zRangeByScore, zRevRangeByScore
$redis->zRangeByScore(key, start, end, array(withscores, limit));
Return all elements of the zset at key with score >= start and score <= end

zCount
$redis->zCount(key, start, end);
Return the number of elements of the zset at key with score >= start and score <= end

zRemRangeByScore, zDeleteRangeByScore
$redis->zRemRangeByScore('key', start, end);
Remove all elements of the zset at key with score >= start and score <= end; returns the number removed

zSize, zCard
Return the number of elements in the zset at key

zScore
$redis->zScore(key, val2);
Return the score of member val2 in the zset at key

zRank, zRevRank
$redis->zRevRank(key, val);
Return the rank (i.e. index, starting at 0) of val in the zset at key, ordered by score ascending; returns "null" if val is not a member. zRevRank ranks by score descending

zIncrBy
$redis->zIncrBy('key', increment, 'member');
If member already exists in the zset at key, add increment to its score; otherwise add the member with score = increment

zUnion/zInter
Parameters:
keyOutput
arrayZSetKeys
arrayWeights
aggregateFunction Either "SUM", "MIN", or "MAX": defines the behaviour to use on duplicate entries during the zUnion.
Compute the union/intersection of N zsets and store the result in the output key. Before the AGGREGATE step, each element's score is multiplied by the corresponding WEIGHT parameter; if no WEIGHT is given, it defaults to 1. The default AGGREGATE is SUM, i.e. the score of an element in the result is the SUM of its scores across the input sets; MIN and MAX take the minimum and maximum of those scores instead.
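A small sketch of zUnion with weights and an aggregate function (the key names are made up):
$redis->zAdd('z1', 1, 'a'); $redis->zAdd('z1', 2, 'b');
$redis->zAdd('z2', 3, 'a');
$redis->zUnion('out', array('z1', 'z2'), array(1, 2), 'MAX'); // a's score becomes max(1*1, 3*2) = 6
$redis->zRange('out', 0, -1, true);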

Hash operations
hSet
$redis->hSet('h', 'key1', 'hello');
Add the element key1 → hello to the hash named h

hGet
$redis->hGet('h', 'key1');
Return the value for key1 in the hash named h (hello)

hLen
$redis->hLen('h');
Return the number of elements in the hash named h

hDel
$redis->hDel('h', 'key1');
Remove the field key1 from the hash named h

hKeys
$redis->hKeys('h');
Return all field names of the hash named h

hVals
$redis->hVals('h')
Return the values of all fields of the hash named h

hGetAll
$redis->hGetAll('h');
Return all fields of the hash named h together with their values

hExists
$redis->hExists('h', 'a');
Whether the hash named h contains a field named a

hIncrBy
$redis->hIncrBy('h', 'x', 2);
Increment the value of x in the hash named h by 2

hMset
$redis->hMset('user:1', array('name' => 'Joe', 'salary' => 2000));
Add several fields to the hash at key in one call

hMGet
$redis->hmGet('h', array('field1', 'field2'));
Return the values for field1 and field2 of the hash named h

Server/database operations
flushDB
Empty the current database

flushAll
Empty all databases

randomKey
Return a random key from the keyspace
$key = $redis->randomKey();

select
Select a database
move
Move a key to another database
$redis->select(0); // switch to DB 0
$redis->set('x', '42'); // write 42 to x
$redis->move('x', 1); // move to DB 1
$redis->select(1); // switch to DB 1
$redis->get('x'); // will return 42

rename, renameKey
Rename a key
$redis->set('x', '42');
$redis->rename('x', 'y');
$redis->get('y'); // → 42
$redis->get('x'); // → `FALSE`

renameNx
Like rename, but fails if the target name already exists

setTimeout, expire
Set a key's time to live in seconds
$redis->setTimeout('x', 3);

expireAt
Make a key live until a given unix timestamp
$redis->expireAt('x', time() + 3);

keys, getKeys
Return all keys matching the given pattern
$keyWithUserPrefix = $redis->keys('user*');

dbSize
Return how many keys the current database holds
$count = $redis->dbSize();

auth
Password authentication
$redis->auth('foobared');

bgrewriteaof
Use the AOF for database persistence
$redis->bgrewriteaof();

slaveof
Configure this instance as a slave of another server
$redis->slaveof('10.0.1.7', 6379);

save
Save the data to disk synchronously

bgsave
Save the data to disk asynchronously

lastSave
Return the unix timestamp of the last successful save to disk

info
Return the redis version and other details

type
Return the type of a key:
string: Redis::REDIS_STRING
set: Redis::REDIS_SET
list: Redis::REDIS_LIST
zset: Redis::REDIS_ZSET
hash: Redis::REDIS_HASH
other: Redis::REDIS_NOT_FOUND

Author: AngryFox    Category: Uncategorized    August 4th, 2012    No comments

1. redis-benchmark
Redis benchmarking: tests the performance of a redis server

redis-benchmark -h localhost -p 6379 -c 100 -n 100000
100 concurrent connections, 100000 requests, testing the redis server at host localhost, port 6379

[root@Architect redis-1.2.6]# redis-benchmark -h localhost -p 6379 -c 100 -n 100000
====== PING ======
10001 requests completed in 0.41 seconds
50 parallel clients
3 bytes payload
keep alive: 1

0.01% <= 0 milliseconds
23.09% <= 1 milliseconds
85.82% <= 2 milliseconds
95.60% <= 3 milliseconds
97.20% <= 4 milliseconds
97.96% <= 5 milliseconds
98.83% <= 6 milliseconds
99.41% <= 7 milliseconds
99.70% <= 8 milliseconds
99.99% <= 9 milliseconds
100.00% <= 12 milliseconds
24274.27 requests per second

2. redis-cli

redis-cli -h localhost -p 6380 monitor
Dump all the received requests in real time;
Watch the connections and read/write operations of the redis instance at host localhost, port 6380

[root@Architect redis-1.2.6]# redis-cli -h localhost -p 6380 monitor
+OK
+1289800615.808225 "monitor"
+1289800615.839079 "GET" "name"
+1289800615.853694 "PING"
+1289800615.853783 "PING"
+1289800615.854646 "PING"
+1289800615.854974 "PING"
+1289800615.857693 "PING"
+1289800615.866862 "PING"
+1289800615.871944 "PING"

redis-cli -h localhost -p 6380 info
Provide information and statistics about the server;
Print statistics for the redis service at host localhost, port 6380

[root@Architect redis-1.2.6]# redis-cli -h localhost -p 6380 info
redis_version:2.0.4
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:32
multiplexing_api:epoll
process_id:21990
uptime_in_seconds:490580
uptime_in_days:5
connected_clients:103
connected_slaves:0
blocked_clients:0
used_memory:4453240
used_memory_human:4.25M
changes_since_last_save:200
bgsave_in_progress:0
last_save_time:1290394640
bgrewriteaof_in_progress:0
total_connections_received:809
total_commands_processed:44094018
expired_keys:0
hash_max_zipmap_entries:64
hash_max_zipmap_value:512
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:slave
master_host:localhost
master_port:6379
master_link_status:up
master_last_io_seconds_ago:18
db0:keys=1319,expires=0

3. redis-stat

redis-stat host localhost port 6380 overview
Print general information about a Redis instance;
Continuously print overview information for the redis instance at host localhost, port 6380

[root@Architect redis-1.2.6]# redis-stat port 6380 overview
------- data ------ ------------ load ----------------------------- - childs -
keys used-mem clients requests connections
1319 5.37M 103 44108021 (+44108021) 810
1319 5.38M 103 44108124 (+103) 810
1319 5.38M 103 44108225 (+101) 810
1319 5.39M 103 44108326 (+101) 810
1319 5.40M 103 44108427 (+101) 810
1319 5.41M 103 44108528 (+101) 810

redis-stat host localhost port 6380 latency
Measure Redis server latency;
Print the per-request response time of the redis service at host localhost, port 6380

[root@Architect redis-1.2.6]# redis-stat port 6380 latency
1: 0.16 ms
2: 0.11 ms
3: 0.15 ms
4: 0.11 ms
5: 0.18 ms
6: 0.14 ms

Author: AngryFox    Category: Uncategorized    August 4th, 2012    No comments

Installing redis can fail with "tclsh8.5 not found"; here is the fix.
Tcl's official site:

http://www.linuxfromscratch.org/blfs/view/cvs/general/tcl.html

The fix:

cd /opt
wget http://downloads.sourceforge.net/tcl/tcl8.5.11-src.tar.gz
or
wget ftp://mirror.ovh.net/gentoo-distfiles/distfiles/tcl8.5.11-src.tar.gz

tar xvfz tcl8.5.11-src.tar.gz
cd tcl8.5.11/unix
./configure
make
make install

Installing Redis

Download the latest release

Official site: http://redis.io/ or http://code.google.com/p/redis/downloads/list

Step 1: download, compile and install

#wget http://redis.googlecode.com/files/redis-2.4.4.tar.gz
#tar zxvf redis-2.4.4.tar.gz
#cd redis-2.4.4
#make
#make install
#cp redis.conf /etc/

Step 2: edit the configuration
#vi /etc/redis.conf

The configuration is given in the appendix

Step 3: start the process

#redis-server /etc/redis.conf
Check whether the process started successfully
#ps -ef | grep redis
Test writing a key
#redis-cli set test "123456"
Read the key back
#redis-cli get test

Shutting redis down
# redis-cli shutdown //shut down everything
Shutting down redis on a specific port
# redis-cli -p 6397 shutdown //shut down the redis on port 6397

Note: on shutdown the cached data is automatically dumped to disk; the file location is set by dbfilename dump.rdb in redis.conf

PHP extension

http://code.google.com/p/php-redis/

Appendix: a known-good configuration
The full configuration file:

    # Redis configuration file example  

    # Note on units: when memory size is needed, it is possible to specifiy
    # it in the usual form of 1k 5GB 4M and so forth:
    #
    # 1k => 1000 bytes
    # 1kb => 1024 bytes
    # 1m => 1000000 bytes
    # 1mb => 1024*1024 bytes
    # 1g => 1000000000 bytes
    # 1gb => 1024*1024*1024 bytes
    #
    # units are case insensitive so 1GB 1Gb 1gB are all the same.  

    # By default Redis does not run as a daemon. Use 'yes' if you need it.
    # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
    daemonize yes  

    # When running daemonized, Redis writes a pid file in /var/run/redis.pid by
    # default. You can specify a custom pid file location here.
    pidfile /var/run/redis.pid  

    # Accept connections on the specified port, default is 6379.
    # If port 0 is specified Redis will not listen on a TCP socket.
    port 6379  

    # If you want you can bind a single interface, if the bind option is not
    # specified all the interfaces will listen for incoming connections.
    #
     bind 127.0.0.1  

    # Specify the path for the unix socket that will be used to listen for
    # incoming connections. There is no default, so Redis will not listen
    # on a unix socket when not specified.
    #
    # unixsocket /tmp/redis.sock
    # unixsocketperm 755  

    # Close the connection after a client is idle for N seconds (0 to disable)
    timeout 600  

    # Set server verbosity to 'debug'
    # it can be one of:
    # debug (a lot of information, useful for development/testing)
    # verbose (many rarely useful info, but not a mess like the debug level)
    # notice (moderately verbose, what you want in production probably)
    # warning (only very important / critical messages are logged)
    loglevel verbose  

    # Specify the log file name. Also 'stdout' can be used to force
    # Redis to log on the standard output. Note that if you use standard
    # output for logging but daemonize, logs will be sent to /dev/null
    logfile stdout  

    # To enable logging to the system logger, just set 'syslog-enabled' to yes,
    # and optionally update the other syslog parameters to suit your needs.
    # syslog-enabled no  

    # Specify the syslog identity.
    # syslog-ident redis  

    # Specify the syslog facility.  Must be USER or between LOCAL0-LOCAL7.
    # syslog-facility local0  

    # Set the number of databases. The default database is DB 0, you can select
    # a different one on a per-connection basis using SELECT <dbid> where
    # dbid is a number between 0 and 'databases'-1
    databases 16  

    ################################ SNAPSHOTTING  #################################
    #
    # Save the DB on disk:
    #
    #   save <seconds> <changes>
    #
    #   Will save the DB if both the given number of seconds and the given
    #   number of write operations against the DB occurred.
    #
    #   In the example below the behaviour will be to save:
    #   after 900 sec (15 min) if at least 1 key changed
    #   after 300 sec (5 min) if at least 10 keys changed
    #   after 60 sec if at least 10000 keys changed
    #
    #   Note: you can disable saving at all commenting all the "save" lines.  

    save 900 1
    save 300 10
    save 60 10000  

    # Compress string objects using LZF when dump .rdb databases?
    # For default that's set to 'yes' as it's almost always a win.
    # If you want to save some CPU in the saving child set it to 'no' but
    # the dataset will likely be bigger if you have compressible values or keys.
    rdbcompression yes  

    # The filename where to dump the DB
    dbfilename dump.rdb  

    # The working directory.
    #
    # The DB will be written inside this directory, with the filename specified
    # above using the 'dbfilename' configuration directive.
    #
    # Also the Append Only File will be created inside this directory.
    #
    # Note that you must specify a directory here, not a file name.
    dir /usr/local/redis-2.4.4  


    ################################# REPLICATION #################################  

    # Master-Slave replication. Use slaveof to make a Redis instance a copy of
    # another Redis server. Note that the configuration is local to the slave
    # so for example it is possible to configure the slave to save the DB with a
    # different interval, or to listen to another port, and so on.
    #
    # slaveof <masterip> <masterport>  

    # If the master is password protected (using the "requirepass" configuration
    # directive below) it is possible to tell the slave to authenticate before
    # starting the replication synchronization process, otherwise the master will
    # refuse the slave request.
    #
    # masterauth <master-password>  

    # When a slave lost the connection with the master, or when the replication
    # is still in progress, the slave can act in two different ways:
    #
    # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
    #    still reply to client requests, possibly with out of data data, or the
    #    data set may just be empty if this is the first synchronization.
    #
    # 2) if slave-serve-stale data is set to 'no' the slave will reply with
    #    an error "SYNC with master in progress" to all the kind of commands
    #    but to INFO and SLAVEOF.
    #
    slave-serve-stale-data yes  

    ################################## SECURITY ###################################  

    # Require clients to issue AUTH <PASSWORD> before processing any other
    # commands.  This might be useful in environments in which you do not trust
    # others with access to the host running redis-server.
    #
    # This should stay commented out for backward compatibility and because most
    # people do not need auth (e.g. they run their own servers).
    #
    # Warning: since Redis is pretty fast an outside user can try up to
    # 150k passwords per second against a good box. This means that you should
    # use a very strong password otherwise it will be very easy to break.
    #
    # requirepass foobared  

    # Command renaming.
    #
    # It is possilbe to change the name of dangerous commands in a shared
    # environment. For instance the CONFIG command may be renamed into something
    # of hard to guess so that it will be still available for internal-use
    # tools but not available for general clients.
    #
    # Example:
    #
    # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
    #
    # It is also possilbe to completely kill a command renaming it into
    # an empty string:
    #
    # rename-command CONFIG ""  

    ################################### LIMITS ####################################  

    # Set the max number of connected clients at the same time. By default there
    # is no limit, and it's up to the number of file descriptors the Redis process
    # is able to open. The special value '0' means no limits.
    # Once the limit is reached Redis will close all the new connections sending
    # an error 'max number of clients reached'.
    #
    # maxclients 128  

    # Don't use more memory than the specified amount of bytes.
    # When the memory limit is reached Redis will try to remove keys with an
    # EXPIRE set. It will try to start freeing keys that are going to expire
    # in little time and preserve keys with a longer time to live.
    # Redis will also try to remove objects from free lists if possible.
    #
    # If all this fails, Redis will start to reply with errors to commands
    # that will use more memory, like SET, LPUSH, and so on, and will continue
    # to reply to most read-only commands like GET.
    #
    # WARNING: maxmemory can be a good idea mainly if you want to use Redis as a
    # 'state' server or cache, not as a real DB. When Redis is used as a real
    # database the memory usage will grow over the weeks, it will be obvious if
    # it is going to use too much memory in the long run, and you'll have the time
    # to upgrade. With maxmemory after the limit is reached you'll start to get
    # errors for write operations, and this may even lead to DB inconsistency.
    #
    # maxmemory <bytes>  

    # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
    # is reached? You can select among five behavior:
    #
    # volatile-lru -> remove the key with an expire set using an LRU algorithm
    # allkeys-lru -> remove any key accordingly to the LRU algorithm
    # volatile-random -> remove a random key with an expire set
    # allkeys->random -> remove a random key, any key
    # volatile-ttl -> remove the key with the nearest expire time (minor TTL)
    # noeviction -> don't expire at all, just return an error on write operations
    #
    # Note: with all the kind of policies, Redis will return an error on write
    #       operations, when there are not suitable keys for eviction.
    #
    #       At the date of writing this commands are: set setnx setex append
    #       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
    #       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
    #       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
    #       getset mset msetnx exec sort
    #
    # The default is:
    #
    # maxmemory-policy volatile-lru  

    # LRU and minimal TTL algorithms are not precise algorithms but approximated
    # algorithms (in order to save memory), so you can select as well the sample
    # size to check. For instance for default Redis will check three keys and
    # pick the one that was used less recently, you can change the sample size
    # using the following configuration directive.
    #
    # maxmemory-samples 3  

    ############################## APPEND ONLY MODE ###############################  

    # By default Redis asynchronously dumps the dataset on disk. If you can live
    # with the idea that the latest records will be lost if something like a crash
    # happens this is the preferred way to run Redis. If instead you care a lot
    # about your data and don't want to that a single record can get lost you should
    # enable the append only mode: when this mode is enabled Redis will append
    # every write operation received in the file appendonly.aof. This file will
    # be read on startup in order to rebuild the full dataset in memory.
    #
    # Note that you can have both the async dumps and the append only file if you
    # like (you have to comment the "save" statements above to disable the dumps).
    # Still if append only mode is enabled Redis will load the data from the
    # log file at startup ignoring the dump.rdb file.
    #
    # IMPORTANT: Check the BGREWRITEAOF to check how to rewrite the append
    # log file in background when it gets too big.  

    appendonly yes  

    # The name of the append only file (default: "appendonly.aof")
     appendfilename appendonly.aof  

    # The fsync() call tells the Operating System to actually write data on disk
    # instead to wait for more data in the output buffer. Some OS will really flush
    # data on disk, some other OS will just try to do it ASAP.
    #
    # Redis supports three different modes:
    #
    # no: don't fsync, just let the OS flush the data when it wants. Faster.
    # always: fsync after every write to the append only log . Slow, Safest.
    # everysec: fsync only if one second passed since the last fsync. Compromise.
    #
    # The default is "everysec" that's usually the right compromise between
    # speed and data safety. It's up to you to understand if you can relax this to
    # "no" that will will let the operating system flush the output buffer when
    # it wants, for better performances (but if you can live with the idea of
    # some data loss consider the default persistence mode that's snapshotting),
    # or on the contrary, use "always" that's very slow but a bit safer than
    # everysec.
    #
    # If unsure, use "everysec".  

    # appendfsync always
    appendfsync everysec
    # appendfsync no  

    # When the AOF fsync policy is set to always or everysec, and a background
    # saving process (a background save or AOF log background rewriting) is
    # performing a lot of I/O against the disk, in some Linux configurations
    # Redis may block too long on the fsync() call. Note that there is no fix for
    # this currently, as even performing fsync in a different thread will block
    # our synchronous write(2) call.
    #
    # In order to mitigate this problem it's possible to use the following option
    # that will prevent fsync() from being called in the main process while a
    # BGSAVE or BGREWRITEAOF is in progress.
    #
    # This means that while another child is saving the durability of Redis is
    # the same as "appendfsync none", that in pratical terms means that it is
    # possible to lost up to 30 seconds of log in the worst scenario (with the
    # default Linux settings).
    #
    # If you have latency problems turn this to "yes". Otherwise leave it as
    # "no" that is the safest pick from the point of view of durability.
    no-appendfsync-on-rewrite no  

    # Automatic rewrite of the append only file.
    # Redis is able to automatically rewrite the log file implicitly calling
    # BGREWRITEAOF when the AOF log size will growth by the specified percentage.
    #
    # This is how it works: Redis remembers the size of the AOF file after the
    # latest rewrite (or if no rewrite happened since the restart, the size of
    # the AOF at startup is used).
    #
    # This base size is compared to the current size. If the current size is
    # bigger than the specified percentage, the rewrite is triggered. Also
    # you need to specify a minimal size for the AOF file to be rewritten, this
    # is useful to avoid rewriting the AOF file even if the percentage increase
    # is reached but it is still pretty small.
    #
    # Specify a precentage of zero in order to disable the automatic AOF
    # rewrite feature.  

    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb  

    ################################## SLOW LOG ###################################  

    # The Redis Slow Log is a system to log queries that exceeded a specified
    # execution time. The execution time does not include the I/O operations
    # like talking with the client, sending the reply and so forth,
    # but just the time needed to actually execute the command (this is the only
    # stage of command execution where the thread is blocked and can not serve
    # other requests in the meantime).
    #
    # You can configure the slow log with two parameters: one tells Redis
    # what is the execution time, in microseconds, to exceed in order for the
    # command to get logged, and the other parameter is the length of the
    # slow log. When a new command is logged the oldest one is removed from the
    # queue of logged commands.  

    # The following time is expressed in microseconds, so 1000000 is equivalent
    # to one second. Note that a negative number disables the slow log, while
    # a value of zero forces the logging of every command.
    slowlog-log-slower-than 10000  

    # There is no limit to this length. Just be aware that it will consume memory.
    # You can reclaim memory used by the slow log with SLOWLOG RESET.
    slowlog-max-len 1024  

    ################################ VIRTUAL MEMORY ###############################  

    ### WARNING! Virtual Memory is deprecated in Redis 2.4
    ### The use of Virtual Memory is strongly discouraged.  

    # Virtual Memory allows Redis to work with datasets bigger than the actual
    # amount of RAM needed to hold the whole dataset in memory.
    # In order to do so very used keys are taken in memory while the other keys
    # are swapped into a swap file, similarly to what operating systems do
    # with memory pages.
    #
    # To enable VM just set 'vm-enabled' to yes, and set the following three
    # VM parameters accordingly to your needs.  

    vm-enabled no
     #vm-enabled yes  

    # This is the path of the Redis swap file. As you can guess, swap files
    # can't be shared by different Redis instances, so make sure to use a swap
    # file for every redis process you are running. Redis will complain if the
    # swap file is already in use.
    #
    # The best kind of storage for the Redis swap file (that's accessed at random)
    # is a Solid State Disk (SSD).
    #
    # *** WARNING *** if you are using a shared hosting the default of putting
    # the swap file under /tmp is not secure. Create a dir with access granted
    # only to Redis user and configure Redis to create the swap file there.
    vm-swap-file /tmp/redis.swap  

    # vm-max-memory configures the VM to use at max the specified amount of
    # RAM. Everything that deos not fit will be swapped on disk *if* possible, that
    # is, if there is still enough contiguous space in the swap file.
    #
    # With vm-max-memory 0 the system will swap everything it can. Not a good
    # default, just specify the max amount of RAM you can in bytes, but it's
    # better to leave some margin. For instance specify an amount of RAM
    # that's more or less between 60 and 80% of your free RAM.
    vm-max-memory 0  

    # Redis swap files is split into pages. An object can be saved using multiple
    # contiguous pages, but pages can't be shared between different objects.
    # So if your page is too big, small objects swapped out on disk will waste
    # a lot of space. If you page is too small, there is less space in the swap
    # file (assuming you configured the same number of total swap file pages).
    #
    # If you use a lot of small objects, use a page size of 64 or 32 bytes.
    # If you use a lot of big objects, use a bigger page size.
    # If unsure, use the default :)
    vm-page-size 32  

    # Number of total memory pages in the swap file.
    # Given that the page table (a bitmap of free/used pages) is taken in memory,
    # every 8 pages on disk will consume 1 byte of RAM.
    #
    # The total swap size is vm-page-size * vm-pages
    #
    # With the default of 32-bytes memory pages and 134217728 pages Redis will
    # use a 4 GB swap file, that will use 16 MB of RAM for the page table.
    #
    # It's better to use the smallest acceptable value for your application,
    # but the default is large in order to work in most conditions.
    vm-pages 134217728  

    # Max number of VM I/O threads running at the same time.
    # This threads are used to read/write data from/to swap file, since they
    # also encode and decode objects from disk to memory or the reverse, a bigger
    # number of threads can help with big objects even if they can't help with
    # I/O itself as the physical device may not be able to couple with many
    # reads/writes operations at the same time.
    #
    # The special value of 0 turn off threaded I/O and enables the blocking
    # Virtual Memory implementation.
    vm-max-threads 4  

    ############################### ADVANCED CONFIG ###############################  

    # Hashes are encoded in a special way (much more memory efficient) when they
    # have at max a given numer of elements, and the biggest element does not
    # exceed a given threshold. You can configure this limits with the following
    # configuration directives.
    hash-max-zipmap-entries 512
    hash-max-zipmap-value 64  

    # Similarly to hashes, small lists are also encoded in a special way in order
    # to save a lot of space. The special representation is only used when
    # you are under the following limits:
    list-max-ziplist-entries 512
    list-max-ziplist-value 64  

    # Sets have a special encoding in just one case: when a set is composed
    # of just strings that happens to be integers in radix 10 in the range
    # of 64 bit signed integers.
    # The following configuration setting sets the limit in the size of the
    # set in order to use this special memory saving encoding.
    set-max-intset-entries 512  

    # Similarly to hashes and lists, sorted sets are also specially encoded in
    # order to save a lot of space. This encoding is only used when the length and
    # elements of a sorted set are below the following limits:
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64  

    # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
    # order to help rehashing the main Redis hash table (the one mapping top-level
    # keys to values). The hash table implementation redis uses (see dict.c)
    # performs a lazy rehashing: the more operation you run into an hash table
    # that is rhashing, the more rehashing "steps" are performed, so if the
    # server is idle the rehashing is never complete and some more memory is used
    # by the hash table.
    #
    # The default is to use this millisecond 10 times every second in order to
    # active rehashing the main dictionaries, freeing memory when possible.
    #
    # If unsure:
    # use "activerehashing no" if you have hard latency requirements and it is
    # not a good thing in your environment that Redis can reply form time to time
    # to queries with 2 milliseconds delay.
    #
    # use "activerehashing yes" if you don't have such hard requirements but
    # want to free memory asap when possible.
    activerehashing yes  

    ################################## INCLUDES ###################################  

    # Include one or more other config files here.  This is useful if you
    # have a standard template that goes to all redis server but also need
    # to customize a few per-server settings.  Include files can include
    # other files, so use this wisely.
    #
    # include /path/to/local.conf
    # include /path/to/other.conf

Notes on the settings:

1. Whether to run as a daemon; default no
daemonize no

2. When running as a daemon, a pid file is required; default /var/run/redis.pid
pidfile /var/run/redis.pid

3. Listening port; default 6379
port 6379

4. IP to bind to; default 127.0.0.1 (commented out)
bind 127.0.0.1

5. Timeout; default 300 (seconds)
timeout 300

6. Log level; one of debug, verbose (the default), notice, warning
loglevel verbose

7. Log destination; default stdout
logfile stdout

8. Number of databases; default 16, default database is 0
databases 16

9. How many write operations within how long before the data is synced to the data file. Several conditions can be combined; the default configuration sets three.

at least 1 key changed within 900 seconds (15 minutes)
save 900 1
at least 10 keys changed within 300 seconds (5 minutes)
save 300 10

10. Whether to compress data when storing to the local database; default yes
rdbcompression yes

11. Local database file name; default dump.rdb
dbfilename /root/redis_db/dump.rdb

12. Local database directory; default ./
dir /root/redis_db/

13. When this machine is a slave, the master's IP and port (commented out)
slaveof

14. When this machine is a slave, the password for connecting to the master (commented out)
masterauth

15. Connection password (commented out)
requirepass foobared

16. Maximum number of client connections; unlimited by default (commented out)
maxclients 128

17. Maximum memory. Once it is reached, Redis first tries to evict keys that have expired or are about to expire; if the limit is still exceeded after that, no more writes are possible. (commented out)
maxmemory

18. Whether to log every update operation (AOF). If disabled, data from a recent window may be lost on power failure, because redis syncs the data file only according to the save conditions above, so some data exists only in memory for a while. Default no
appendonly yes

19. Update-log file name; default appendonly.aof (commented out)
appendfilename /root/redis_db/appendonly.aof

20. Update-log fsync policy; three values: no means let the operating system flush the data cache to disk, always means call fsync() after every update, everysec means sync once per second (the default).
appendfsync everysec

21. Whether to use virtual memory; default no
vm-enabled yes

22. Virtual memory swap file path; default /tmp/redis.swap; must not be shared between Redis instances
vm-swap-file /tmp/redis.swap

23. All data larger than vm-max-memory is stored in virtual memory. No matter how small vm-max-memory is set, all index data (the keys) stays in memory; in other words, with vm-max-memory set to 0 all values live on disk. Default 0.
vm-max-memory 0

24. The swap file is stored in pages of 32 bytes each
vm-page-size 32

25. Maximum number of pages in the swap file
vm-pages 134217728

26. Number of threads accessing the swap file; best kept no higher than the machine's core count. Set to 0 and all swap-file operations are serial, which can cause fairly long delays but gives good data-integrity guarantees.
vm-max-threads 4

27. Glue small output buffers together so that several responses can be sent to the client in a single TCP packet; I am not entirely sure of the underlying mechanism or the real-world effect, so per the comments, set it to yes when in doubt
glueoutputbuf yes

28. Redis 2.0 introduced the hash data type. When a hash holds no more than the given number of elements and its largest element stays under the threshold, it is stored with a special, far more memory-efficient encoding; the two thresholds are set here
hash-max-zipmap-entries 64

29. Maximum size of a single element in such a hash
hash-max-zipmap-value 512

30. When enabled, redis spends 1 millisecond of CPU time every 100 milliseconds rehashing its main hash table, which reduces memory usage. If your use case has very strict latency requirements and an occasional 2 ms delay on requests is unacceptable, set this to no; without such strict requirements, set it to yes so memory is freed as quickly as possible
activerehashing yes

See also:
Redis deployment and usage guide: http://www.elain.org/?p=505

========================================================

Installing the PHP Redis extension

First download it from https://github.com/nicolasff/phpredis/downloads
#wget https://github.com/nicolasff/phpredis/downloads
# tar -zxvf nicolasff-phpredis-2.1.3-124-gd4ad907.tar.gz
# mv nicolasff-phpredis-d4ad907 php-5.3.8/ext/phpredis/
# cd php-5.3.8/ext/phpredis/
# /usr/local/php/bin/phpize
# ./configure --with-php-config=/usr/local/php/bin/php-config
# make && make install
Configure php.ini

vi /usr/local/php/lib/php.ini
(add:
extension=redis.so
)
First check whether extension_dir=/……. is set
Restart apache or nginx

# /usr/local/apache2/bin/apachectl restart

Test code:

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->set('test', 'hello world!');
echo $redis->get('test');
?>
References:

Linux (CentOS 5.5) Redis installation and the Redis PHP extension

http://www.linuxidc.com/Linux/2011-08/41404.htm

Installing redis and the phpredis module

http://skandgjxa.blog.163.com/blog/static/14152982011712112933816/

Compiling and installing Redis and its PHP extension on RHEL5

http://hi.baidu.com/zjstandup/blog/item/9f38b825d379c96c35a80f7f.html