Having used Composer for package dependency management for a long time, we keep many shared libraries on GitHub and pull them in as dependencies across multiple projects, which saves a lot of effort when updating versions. A few problems remain, though:

  • Every update means running the unit tests locally, committing and pushing to GitHub, and then waiting for the Composer side to pick up the change before it takes effect;
  • Company-internal modules cannot be hosted on GitHub at all.

The official approach

Fortunately, Composer supports private repositories for exactly this case:

{
    "repositories": [
        {
            "type": "vcs",
            "url": "http://svn.example.org/projectA/",
            "trunk-path": "Trunk",
            "branches-path": "Branches",
            "tags-path": "Tags"
        }
    ]
}

See the documentation for details: https://getcomposer.org/doc/05-repositories.md
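
Once the repository is declared, the library inside it is required by name like any other package; Composer derives the available versions from the repository's tags and branches, and typically exposes trunk as dev-trunk. A minimal sketch, assuming the library's own composer.json names it projectA/projectA (a hypothetical name):

# Hypothetical package name taken from the repository's composer.json
composer require projectA/projectA:dev-trunk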

Since our company uses Subversion, importing a private repository this way actually checks out the specified tag with svn checkout. The vendor directory then ends up littered with .svn directories, which keeps the main project from committing cleanly, and switching tags sometimes fails to check out at all.
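
You can confirm the problem by listing the stray metadata directories that the checkout leaves behind (a diagnostic one-liner, not part of the original workflow):

# Lists every .svn directory that svn checkout left inside vendor/
find vendor -type d -name .svn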

Many people have reported this in the Composer project's issues. The fix is to use svn export instead of svn checkout, via the following plugin:
https://github.com/LinearSoft/composer-svn-export
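
The plugin is pulled into the project as an ordinary Composer dependency, for example:

# Adds the plugin to require and installs it
composer require linearsoft/composer-svn-export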

A new approach

The plugin effectively requires you to run your own Packagist-style site. In practice a single packages.json file is enough, but it must be served over valid HTTPS, otherwise you will see an error like this:

➜  www.my-project.com composer update
You are running composer with xdebug enabled. This has a major impact on runtime performance. See https://getcomposer.org/xdebug
Added SvnExport repo: NewPackage
Loading composer repositories with package information

                                                                                                                                             
  [Composer\Downloader\TransportException]                                                                                                   
  The "https://packagist.my-project.dev/packages.json" file could not be downloaded: SSL operation failed with code 1. OpenSSL Error messages:  
  error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed                                                          
  Failed to enable crypto                                                                                                                    
  failed to open stream: operation failed         

If you do not have such an HTTPS endpoint, you can create the packages.json file as a GitHub Gist instead:

{
    "packages": {
        "tc/analyse": {
            "0.9.2": {
                "name": "tc/analyse",
                "version": "0.9.2",
                "source": {
                    "type": "svn",
                    "url": "https://svn.tc.dev/library/analyse",
                    "reference": "/tags/0.9.2"
                }
            },
            "0.9.3": {
                "name": "tc/analyse",
                "version": "0.9.3",
                "source": {
                    "type": "svn",
                    "url": "https://svn.tc.dev/library/analyse",
                    "reference": "/tags/0.9.3"
                }
            },
            "dev-master": {
                "name": "tc/analyse",
                "version": "dev-master",
                "source": {
                    "type": "svn",
                    "url": "https://svn.tc.dev/library/analyse",
                    "reference": "/trunk"
                }
            }
        }
    }
}

Then point the main project's composer.json at the Gist URL:

{
    "require" : {
        "php" : ">=5.4.0",
        "tc/analyse" : "0.9.3",
        "linearsoft/composer-svn-export" : "^0.1.2"
    },
    "extra" : {
        "svn-export-repositories" : [
            {
                "name" : "TC",
                "type" : "composer",
                "url" : "https://gist.githubusercontent.com/lancerhe/379eeee89fd0db8c0c4ca19c6bddas2f/raw/915a720635949c9d4013746845b952f88af358db"
            }
        ]
    },
    "minimum-stability" : "stable"
}

Run composer update:

➜  www.my-project.com composer update
You are running composer with xdebug enabled. This has a major impact on runtime performance. See https://getcomposer.org/xdebug
Added SvnExport repo: TC
Loading composer repositories with package information
Updating dependencies (including require-dev)
  - Updating tc/analyse (0.9.2 => 0.9.3)
    Exporting /tags/0.9.3

Writing lock file
Generating autoload files

In the blink of an eye it is already the third day of the 2016 Lunar New Year. I happened to visit my alma mater today, and what struck me most was how fast time passes: it has been ten years since graduation. Looking back, it is high time to properly sum up 2015.

Let me start with goals. At the end of 2014, while at Baidu 91 Wireless, I set myself a goal: become an R&D architect. Looking back, that bar was set a bit high (and for the most part was probably not reached). Mid-year I also made a decision to leave Baidu 91 Wireless. The main reason was that I wanted to move back from a technical support team to a business-line team (after all, that is where I came from), though there were other, more complicated reasons as well. So I now work on the advertising business team of a game company (IGG), which means I am still in my old trade of advertising. Here is how last year's plan turned out:

  1. Keep up the reading plan; the target was lowered to 12 books, though I hoped to exceed it. [Not achieved]
  2. Keep up the exercise plan; good health is the foundation of everything else. [More or less achieved]
  3. After the broad skill gains of 2014, the real need was to shore up weaknesses in specific skills. [More or less achieved]
  4. Keep accumulating knowledge and make good use of GitHub. [Achieved]
  5. Give myself more chances to speak up and learn how to guide discussions. [More or less achieved]
  6. Aim for the R&D architect role and help raise the team's overall level. [Not achieved]

Reposted article: resizing an AWS EC2 disk with resize2fs

When requesting an AWS EC2 instance through the wizard, the default disk size at the storage step is 8 GB; you can enter whatever size you need, for example 100. After finishing the wizard and launching the EC2 instance, log in to the machine and run:

df -hT

You will find that the disk size is not the value you set but still 8 GB. Partitioning and formatting with fdisk and mkfs does not enlarge the space either. After a lot of fiddling, including rebooting the machine, the problem persisted; it turns out the fix is simple: a single resize2fs command does it.

resize2fs /dev/xvde

Note: /dev/xvde may be different in your environment; either fdisk or df will show you the actual device name. If the command above fails with "The filesystem is already 2096896 blocks long. Nothing to do!", you first need to do the following:

<<1>> Look at the filesystem, it is 6G
<<2>> Look at the disk and the partition, the disk is 21.5 GB but the partition is 6 GB (6291456 blocks)
<<3>> Start fdisk for that disk (xvda, so not the partition xvda1)
<<4>> Switch to sector display.
<<5>> Print the partition(s), and remember the start sector (2048 in the example).
<<6>> Delete the partition.
<<7>> Create a new partition.
<<8>> Make it primary.
<<9>> First partition.
<<10>> Enter the old start sector, do NOT make any typo here!!! (2048 in the example) 
<<11>> Hit enter to accept the default (this is the remainder of the disk)
<<12>> Print the changes and make sure the start sector is ok, if not restart at <<6>>
<<13>> Make the partition bootable. do NOT forget this!!!
<<14>> Enter your partition number (1 in the example)
<<15>> Write the partition info back, this will end the fdisk session.
<<16>> Reboot the server, and wait for it to come up (this may take longer than usual).
<<17>> Verify the filesystem size.
<<18>> If the filesystem is not around 20Gb as expected, you can use this command.

# df -h  <<1>>

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      6.0G  2.0G  3.7G  35% / 
tmpfs            15G     0   15G   0% /dev/shm

# fdisk -l  <<2>>

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders
Units = cylinders of 1649 * 512 = 844288 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           2        7632     6291456   83  Linux

# fdisk /dev/xvda  <<3>>

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): u  <<4>>
Changing display/entry units to sectors

Command (m for help): p  <<5>>

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *        2048    12584959     6291456   83  Linux

Command (m for help): d  <<6>>
Selected partition 1

Command (m for help): n  <<7>>
Command action
   e   extended
   p   primary partition (1-4)
p  <<8>>
Partition number (1-4): 1  <<9>>
First sector (17-41943039, default 17): 2048  <<10>>
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): <<11>>
Using default value 41943039

Command (m for help): p <<12>>

Disk /dev/xvda: 21.5 GB, 21474836480 bytes
97 heads, 17 sectors/track, 25435 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0003b587

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1            2048    41943039    20970496   83  Linux

Command (m for help): a  <<13>>
Partition number (1-4): 1  <<14>>


Command (m for help): w  <<15>>
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: ...
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

# reboot  <<16>>



# df -h  <<17>>
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  2.0G   17G  11% / 
tmpfs            15G     0   15G   0% /dev/shm

# resize2fs /dev/xvda1  <<18>>
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/xvda1 to 5242624 (4k) blocks.
The filesystem on /dev/xvda1 is now 5242624 blocks long.

root@vs120 [~]#  df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       20G  7.8G   11G  42% /
tmpfs           498M     0  498M   0% /dev/shm
/usr/tmpDSK     399M   11M  368M   3% /tmp

More details here: http://serverfault.com/questions/414983/ec2-drive-not-ebs-volume-size

ShadowSocks is an open-source SOCKS5 proxy. Because it uses its own custom protocol, it is harder to block or interfere with, and therefore comparatively stable.

Install

A community member has published a one-click script that installs the ShadowSocks server. Log in as root and run:

wget --no-check-certificate https://raw.githubusercontent.com/teddysun/shadowsocks_install/master/shadowsocks-libev.sh
chmod +x shadowsocks-libev.sh
./shadowsocks-libev.sh 2>&1 | tee shadowsocks-libev.log

After installation you end up with server port 8383, client port 1080, and the password you set during setup.

To uninstall, log in as root and run:

./shadowsocks-libev.sh uninstall

Once installation finishes, shadowsocks is already running in the background. Run:

ps -ef | grep ss-server | grep -v ps | grep -v grep

to check whether the process exists. The script also registers shadowsocks-libev to start automatically on boot.
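
On CentOS-style systems you can double-check that registration with chkconfig (a quick check; this assumes the service is registered under the name shadowsocks, matching the init script below):

# Shows the runlevels in which the shadowsocks init script is enabled
chkconfig --list shadowsocks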

Service management commands:

/etc/init.d/shadowsocks start
/etc/init.d/shadowsocks stop
/etc/init.d/shadowsocks restart
/etc/init.d/shadowsocks status

To change the port or the encryption method, edit the configuration file:

/etc/shadowsocks-libev/config.json
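
The file follows the usual shadowsocks-libev layout; a minimal sketch of what it typically contains (the values are placeholders matching the ports above, and the encryption method is only an example):

cat /etc/shadowsocks-libev/config.json
{
    "server": "0.0.0.0",
    "server_port": 8383,
    "local_port": 1080,
    "password": "your-password",
    "timeout": 600,
    "method": "aes-256-cfb"
}

After editing it, restart the service with /etc/init.d/shadowsocks restart for the change to take effect.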

To restart the service every hour:

Run crontab -u root -e and add the following line:

0 */1 * * * /etc/init.d/shadowsocks restart

ShadowSocks clients can be downloaded from the links referenced here; after installing one, configure the client parameters.

If you have a smart router, you can install ShadowSocks on the router itself and skip installing a client on each computer.

Once everything is set up, browsing through ShadowSocks works like this: set ShadowSocks to start automatically; the rest is almost identical to an SSH tunnel. Applications such as Dropbox connect over SOCKS5, and in Chrome install Proxy SwitchySharp or Proxy SwitchyOmega, set the proxy protocol to SOCKS5, the proxy address to 127.0.0.1, the port to 1080, and choose auto-switch mode.
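
Before pointing the browser at the proxy, you can verify that the local client is forwarding traffic by testing the SOCKS5 endpoint directly (a quick check; ifconfig.me is used here only as an example echo service):

# The returned IP should be the ShadowSocks server's, not your own
curl --socks5 127.0.0.1:1080 https://ifconfig.me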

Create S3 Bucket

  1. S3 Dashboard > [Create Bucket]
  2. Create a Bucket - Select a Bucket Name and Region
  3. Bucket: [cash-001]
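
The same bucket can also be created from the AWS CLI instead of the console; a minimal sketch, assuming the CLI is already configured (the region here is only an example):

# "mb" = make bucket; pick the region that matches your setup
aws s3 mb s3://cash-001 --region us-east-1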