This repository was archived by the owner on Feb 8, 2021. It is now read-only.

A draft commit of Kata support #727

Open · wants to merge 53 commits into base: kata-support

Commits (53):
8aac0da
add kata-containers and the depended packages into vendor
lifupan Jun 6, 2018
78de507
update the vendor to coordinate with kata
lifupan Jun 6, 2018
cfa4183
replace runv with kata-containers
lifupan Jun 6, 2018
64c03ec
clean up the codes
Jun 15, 2018
07ed00b
gofmt for those codes
lifupan Jun 15, 2018
8b8b045
using the sandbox api instead of vc api
lifupan Jun 19, 2018
b4e91ad
enable save/load sandbox
lifupan Jun 19, 2018
f20ea2c
Associate containers with sandbox
lifupan Jun 20, 2018
73bcb9e
fix the issue of disassociating sandbox
lifupan Jun 20, 2018
2e70857
update vendor kata-containers to the latest version
lifupan Jun 21, 2018
b03861a
kata_agent: fix connection race
bergwolf Jun 21, 2018
d637321
Share the same pid namespace in a sandbox
lifupan Jun 21, 2018
086f31f
gofmt format the files
lifupan Jun 21, 2018
b90c98b
update vendor kata-containers/agent/protocols/grpc
lifupan Jun 21, 2018
29b7d94
Set sandbox.sharePidNs to false temporarily
lifupan Jun 21, 2018
57d713a
cleanup the sandbox after it exited
lifupan Jun 22, 2018
5db41e2
virtcontainers: To start the kataBuiltInProxy to watch the vm console…
lifupan Jun 23, 2018
43c5e34
don't store the containerconfig info into db
lifupan Jul 3, 2018
970024d
Only keep the aufs testing case
lifupan Jul 3, 2018
356dd86
replace vc.Sandbox with vc.VCSandbox
lifupan Jul 3, 2018
f2485ab
setup the kata-container running env for test
lifupan Jul 3, 2018
d599637
container: fix the issue of missing Envs
lifupan Jul 5, 2018
dccecd9
fix the issue of stop sandbox
lifupan Jul 5, 2018
73b8c47
cleanup the legacy var stoppedChan
lifupan Jul 5, 2018
2fe3bfa
fix the issue of start container failed after restore pod
lifupan Jul 6, 2018
d421237
uprev vendor kata client
lifupan Jul 10, 2018
f75b33d
uprev vendor kata virtcontainers
lifupan Jul 10, 2018
b49cea7
Rename vendor Sirupsen to sirupsen according to upstream
lifupan Jul 10, 2018
f65fa51
uprev vendor intel/govmm/qemu
lifupan Jul 10, 2018
978c818
fix the issue of pausing sandbox
lifupan Jul 6, 2018
54c75d8
fix the issue of missing cmd from container image
lifupan Jul 9, 2018
d3ca391
pod: rollback the operations once starting sandbox failed
lifupan Jul 9, 2018
9f39555
container: fix the issue of wrong RuntimeName
lifupan Jul 10, 2018
1ce4312
pod: fix the issue of missing execId for resize container tty
lifupan Jul 10, 2018
efce5ff
container: fix the issue of missing Env from container image
lifupan Jul 10, 2018
335688f
exec: fix the issue of waitexec process
lifupan Jul 11, 2018
a5841be
integration: fix the wrong exitcode in execsignal testcase
lifupan Jul 11, 2018
53bf002
decommission: do sanity check for pod.sandbox pointer
lifupan Jul 11, 2018
51ac7fe
provision: add the rollback function for createsandbox failed
lifupan Jul 11, 2018
81bc42b
container: don't specify the username in oci spec
lifupan Jul 11, 2018
9f5d2d4
fix the issue of missing hostname
lifupan Jul 11, 2018
9d2bee1
container: remove the unused ns from ocispec
lifupan Jul 12, 2018
10c4087
sandbox: add the volume support for sandbox
lifupan Jul 16, 2018
2814931
container: fix the issue of missing the entrypoint in cmd
lifupan Jul 18, 2018
98ab211
CI: comment out some testcases which are not supported
lifupan Jul 19, 2018
b484c8b
container: fix the issue of using the wrong user
lifupan Jul 19, 2018
35ab4dc
CI: fix the issue of missing 'ps' command in irssi:1 image
lifupan Jul 19, 2018
d6c68c3
container: fix the issue of missing some io contents
lifupan Jul 20, 2018
cb2dddf
exec: fix the issue of missing some io contents
lifupan Jul 21, 2018
fdf80d3
exec: fix the issue of wrong user
lifupan Jul 23, 2018
fe16d76
hack: fix the issue of irssi image missing ps cmd
lifupan Jul 23, 2018
50c5dad
volume: remove the redundant mount
lifupan Jul 23, 2018
026a4d0
volume: fix the issue of missing readonly option
lifupan Jul 23, 2018
15 changes: 9 additions & 6 deletions .travis.yml
@@ -1,5 +1,5 @@
sudo: required
dist: trusty
dist: xenial

language: go
go_import_path: github.com/hyperhq/hyperd
@@ -9,14 +9,14 @@ go:

env:
- HYPER_EXEC_DRIVER=qemu HYPER_STORAGE_DRIVER=aufs
- HYPER_EXEC_DRIVER=qemu HYPER_STORAGE_DRIVER=rawblock
- HYPER_EXEC_DRIVER=libvirt HYPER_STORAGE_DRIVER=overlay
- HYPER_EXEC_DRIVER=libvirt HYPER_STORAGE_DRIVER=rawblock
- HYPER_EXEC_DRIVER=libvirt HYPER_STORAGE_DRIVER=btrfs
# - HYPER_EXEC_DRIVER=qemu HYPER_STORAGE_DRIVER=rawblock
# - HYPER_EXEC_DRIVER=libvirt HYPER_STORAGE_DRIVER=overlay
# - HYPER_EXEC_DRIVER=libvirt HYPER_STORAGE_DRIVER=rawblock
# - HYPER_EXEC_DRIVER=libvirt HYPER_STORAGE_DRIVER=btrfs

before_install:
- sudo apt-get update -qq
- sudo apt-get install -y -qq autoconf automake pkg-config libdevmapper-dev libsqlite3-dev libvirt-dev libvirt-bin aufs-tools wget libaio1 libpixman-1-0 netcat
- sudo apt-get install -y -qq autoconf automake pkg-config libdevmapper-dev libsqlite3-dev libvirt-dev libvirt-bin aufs-tools wget libaio1 libpixman-1-0 netcat realpath libelf-dev
- wget https://s3-us-west-1.amazonaws.com/hypercontainer-download/qemu-hyper/qemu-hyper_2.4.1-1_amd64.deb && sudo dpkg -i --force-all qemu-hyper_2.4.1-1_amd64.deb
- cd `mktemp -d`
- mkdir -p ${GOPATH}/src/github.com/hyperhq
@@ -33,7 +33,10 @@ install:
- hack/install-grpc.sh
- hack/verify-gofmt.sh
- hack/verify-generated-proto.sh
- sed -i 's/\(^.*defaultCPUModel.*=\)\(.*$\)/\1 \"core2duo\"/' vendor/github.com/kata-containers/runtime/virtcontainers/qemu_arch_base.go
- sed -i 's/accel=kvm/accel=tcg/' vendor/github.com/kata-containers/runtime/virtcontainers/qemu_amd64.go
- ./autogen.sh && ./configure && make

script:
- ./scripts/kata-env-setup.sh
- cd ${TRAVIS_BUILD_DIR} && hack/test-cmd.sh
440 changes: 276 additions & 164 deletions daemon/pod/container.go

Large diffs are not rendered by default.

124 changes: 63 additions & 61 deletions daemon/pod/decommission.go
@@ -10,10 +10,10 @@ import (
dockertypes "github.com/docker/engine-api/types"

"github.com/hyperhq/hyperd/utils"
"github.com/hyperhq/runv/hypervisor"
vc "github.com/kata-containers/runtime/virtcontainers"
)

type sandboxOp func(sb *hypervisor.Vm) error
type sandboxOp func(sb vc.VCSandbox) error
type stateValidator func(state PodState) bool

func (p *XPod) DelayDeleteOn() bool {
@@ -27,40 +27,37 @@ func (p *XPod) Stop(graceful int) error {
}

p.Log(DEBUG, "pod stopped, now wait cleanup")
if cleanup := p.waitStopDone(graceful, "stop container"); !cleanup {
p.Log(WARNING, "timeout while wait cleanup pod")
return fmt.Errorf("did not finish clean up in %d seconds", graceful)
}
p.cleanup()

return nil
}

func (p *XPod) ForceQuit() {
err := p.protectedSandboxOperation(
func(sb *hypervisor.Vm) error {
sb.Kill()
return nil
func(sb vc.VCSandbox) error {
_, err := vc.StopSandbox(sb.ID())
return err
},
time.Second*5,
"kill pod")
if err != nil {
p.Log(ERROR, "force quit failed: %v", err)
}
p.cleanup()
}

func (p *XPod) Remove(force bool) error {
var err error

if p.IsRunning() {
if !force {
err := fmt.Errorf("pod is running, cannot be removed")
err = fmt.Errorf("pod is running, cannot be removed")
p.Log(ERROR, err)
return err
}
p.Log(DEBUG, "stop pod before remove")
p.doStopPod(10)
if cleanup := p.waitStopDone(60, "Remove Pod"); !cleanup {
p.Log(WARNING, "timeout while waiting pod stopped")
}
p.cleanup()
}

p.resourceLock.Lock()
@@ -118,8 +115,8 @@ func (p *XPod) Pause() error {
p.statusLock.Unlock()

err := p.protectedSandboxOperation(
func(sb *hypervisor.Vm) error {
return sb.Pause(true)
func(sb vc.VCSandbox) error {
return sb.Pause()
},
time.Second*5,
"pause pod")
@@ -148,8 +145,18 @@ func (p *XPod) UnPause() error {
p.statusLock.Unlock()

err := p.protectedSandboxOperation(
func(sb *hypervisor.Vm) error {
return sb.Pause(false)
func(sb vc.VCSandbox) error {
var err error
defer func() {
if err == nil {
go p.waitVMStop()
}
}()
err = sb.Resume()
if err != nil {
return err
}
return nil
},
time.Second*5,
"resume pod")
@@ -176,8 +183,8 @@ func (p *XPod) KillContainer(id string, sig int64) error {
}
c.setKill()
return p.protectedSandboxOperation(
func(sb *hypervisor.Vm) error {
return sb.KillContainer(id, syscall.Signal(sig))
func(sb vc.VCSandbox) error {
return sb.SignalProcess(id, id, syscall.Signal(sig), true)
},
time.Second*5,
fmt.Sprintf("Kill container %s with %d", id, sig))
@@ -307,7 +314,7 @@ func (p *XPod) RemoveContainer(id string) error {
// protectedSandboxOperation() protect the hypervisor operations, which may
// panic or hang too long time.
func (p *XPod) protectedSandboxOperation(op sandboxOp, timeout time.Duration, comment string) error {
dangerousOp := func(sb *hypervisor.Vm, errChan chan<- error) {
dangerousOp := func(sb vc.VCSandbox, errChan chan<- error) {
defer func() {
err := recover()
if err != nil {
@@ -393,13 +400,13 @@ func (p *XPod) doStopPod(graceful int) error {
}

p.Log(INFO, "stop container success, shutdown sandbox")
result := p.sandbox.Shutdown()
if result.IsSuccess() {
_, err = vc.StopSandbox(p.sandbox.ID())
if err == nil {
p.Log(INFO, "pod is stopped")
return nil
}

err = fmt.Errorf("failed to shuting down: %s", result.Message())
err = fmt.Errorf("failed to shuting down: %s", err)
p.Log(ERROR, err)
return err
}
@@ -448,13 +455,20 @@ func (p *XPod) stopContainers(cList []string, graceful int) error {
}
future.Add(c.Id(), func() error {
var toc <-chan time.Time
var retch = make(chan int32)

if int64(graceful) < 0 {
toc = make(chan time.Time)
} else {
toc = time.After(waitTime)
}

forceKill := graceful == 0
resChan := p.sandbox.WaitProcess(true, []string{c.Id()}, -1)
go func(retch chan int32, c *Container) {
ret, _ := p.sandbox.WaitProcess(c.Id(), c.Id())
retch <- ret
}(retch, c)

c.Log(DEBUG, "now, stop container")
err := c.terminate(forceKill)
// TODO filter container/process can't find error
@@ -464,20 +478,11 @@ func (p *XPod) stopContainers(cList []string, graceful int) error {
return err
}
}
if resChan == nil {
err := fmt.Errorf("cannot wait container %s", c.Id())
p.Log(ERROR, err)
return err
}

for {
select {
case ex, ok := <-resChan:
if !ok {
err := fmt.Errorf("chan broken while waiting container: %s", c.Id())
p.Log(WARNING, err)
return err
}
p.Log(DEBUG, "container %s stopped (%v)", ex.Id, ex.Code)
case ret := <-retch:
p.Log(DEBUG, "container %s stopped (%d)", c.Id(), ret)
return nil
case <-toc:
if forceKill {
@@ -493,6 +498,7 @@ func (p *XPod) stopContainers(cList []string, graceful int) error {
}
}
return nil

})
}

@@ -505,24 +511,6 @@ func (p *XPod) stopContainers(cList []string, graceful int) error {
return nil
}

func (p *XPod) waitStopDone(timeout int, comments string) bool {
select {
case s, ok := <-p.stoppedChan:
if ok {
p.Log(DEBUG, "got stop msg and push it again: %s", comments)
select {
case p.stoppedChan <- s:
default:
}
}
p.Log(DEBUG, "wait stop done: %s", comments)
return true
case <-utils.Timeout(timeout):
p.Log(DEBUG, "wait stop timeout: %s", comments)
return false
}
}

// waitVMStop() should only be call for the life monitoring, others should wait the `waitStopDone`
func (p *XPod) waitVMStop() {
p.statusLock.RLock()
@@ -532,8 +520,20 @@ func (p *XPod) waitVMStop() {
}
p.statusLock.RUnlock()

_, _ = <-p.sandbox.WaitVm(-1)
p.Log(INFO, "got vm exit event")
monitor, err := p.sandbox.Monitor()
if err != nil {
p.Log(INFO, "cannot monitor the vm, %v", err)
} else {
ret, ok := <-monitor
/*got the sandbox released notification, so no cleanup is needed*/
if !ok {
p.Log(INFO, "got vm disassociate event")
return
}
p.Log(INFO, "got vm exit event: %v", ret)
}
//in the future, we need to kill and delete the sandbox here, in case
//there is a dead sandbox process
p.cleanup()
}

@@ -576,10 +576,6 @@ func (p *XPod) cleanup() {
p.statusLock.Unlock()

p.Log(INFO, "pod stopped")
select {
case p.stoppedChan <- true:
default:
}
}

func (p *XPod) decommissionResources() (err error) {
@@ -615,7 +611,13 @@ func (p *XPod) decommissionResources() (err error) {
}
}

p.sandbox = nil
if p.sandbox != nil {
err = p.sandbox.Delete()
if err != nil {
p.Log(ERROR, "remove sandbox failed: %v", err)
}
p.sandbox = nil
}

cleanupHosts(p.Id())
// then it could be start again.
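
The comment on protectedSandboxOperation() above describes guarding hypervisor calls that may panic or hang. Below is a minimal sketch of that pattern against the new vc.VCSandbox type, assuming the sandboxOp alias from this diff; the helper name protectedOp and the exact wiring are illustrative, not the PR's verbatim code.

```go
// Sketch only: run a sandbox operation in a goroutine, convert panics into
// errors, and bound the whole call with a timeout.
func protectedOp(sb vc.VCSandbox, op sandboxOp, timeout time.Duration, comment string) error {
	errChan := make(chan error, 1) // buffered so a late result never leaks the goroutine
	go func() {
		defer func() {
			if r := recover(); r != nil {
				errChan <- fmt.Errorf("%s: sandbox operation panicked: %v", comment, r)
			}
		}()
		errChan <- op(sb)
	}()
	select {
	case err := <-errChan:
		return err
	case <-time.After(timeout):
		return fmt.Errorf("%s: sandbox operation timed out after %v", comment, timeout)
	}
}
```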
107 changes: 55 additions & 52 deletions daemon/pod/exec.go
@@ -8,11 +8,9 @@ import (
"time"

"github.com/docker/docker/pkg/stdcopy"

"github.com/hyperhq/hypercontainer-utils/hlog"
"github.com/hyperhq/hyperd/utils"
"github.com/hyperhq/runv/api"
"github.com/hyperhq/runv/hypervisor"
vc "github.com/kata-containers/runtime/virtcontainers"
)

type Exec struct {
@@ -57,7 +55,7 @@ func (p *XPod) CreateExec(containerId, cmds string, terminal bool) (string, erro
p.statusLock.Lock()
p.execs[execId] = &Exec{
Container: containerId,
Id: execId,
Id: "",
Cmds: command,
Terminal: terminal,
ExitCode: 255,
@@ -85,6 +83,7 @@ type writeCloser struct {
}

func (p *XPod) StartExec(stdin io.ReadCloser, stdout io.WriteCloser, containerId, execId string) error {

c, ok := p.containers[containerId]
if !ok {
err := fmt.Errorf("no container %s available for exec %s", containerId, execId)
@@ -103,7 +102,7 @@ func (p *XPod) StartExec(stdin io.ReadCloser, stdout io.WriteCloser, containerId
}

wReader := &waitClose{ReadCloser: stdin, wait: make(chan bool)}
tty := &hypervisor.TtyIO{
tty := &TtyIO{
Stdin: wReader,
Stdout: stdout,
}
@@ -124,21 +123,44 @@ func (p *XPod) StartExec(stdin io.ReadCloser, stdout io.WriteCloser, containerId
}
}

cmd := vc.Cmd{
Args: es.Cmds,
Envs: c.cmdEnvs([]vc.EnvVar{}),
WorkDir: c.spec.Workdir,
Interactive: es.Terminal,
Detach: !es.Terminal,
User: "0", //set the default user and group
PrimaryGroup: "0",
}

_, process, err := p.sandbox.EnterContainer(containerId, cmd)
if err != nil {
err := fmt.Errorf("cannot enter container %s, with err %s", containerId, err)
p.Log(ERROR, err)
return err
}
es.Id = process.Token

cstdin, cstdout, cstderr, err := p.sandbox.IOStream(containerId, es.Id)
if err != nil {
c.Log(ERROR, err)
return err
}

go streamCopy(tty, cstdin, cstdout, cstderr)

<-wReader.wait

go func(es *Exec) {
result := p.sandbox.WaitProcess(false, []string{execId}, -1)
if result == nil {
ret, err := p.sandbox.WaitProcess(containerId, es.Id)
if err != nil {
es.Log(ERROR, "can not wait exec")
return
}

r, ok := <-result
if !ok {
es.Log(ERROR, "waiting exec interrupted")
return
}
es.Log(DEBUG, "exec terminated at %v with code %d", time.Now(), int(ret))
es.ExitCode = uint8(ret)

es.Log(DEBUG, "exec terminated at %v with code %d", r.FinishedAt, r.Code)
es.ExitCode = uint8(r.Code)
select {
case es.finChan <- true:
es.Log(DEBUG, "wake exec stopped chan")
@@ -147,30 +169,7 @@ func (p *XPod) StartExec(stdin io.ReadCloser, stdout io.WriteCloser, containerId
}
}(es)

var envs []string
for e, v := range c.descript.Envs {
envs = append(envs, fmt.Sprintf("%s=%s", e, v))
}

process := &api.Process{
Container: es.Container,
Id: es.Id,
Terminal: es.Terminal,
Args: es.Cmds,
Envs: envs,
Workdir: c.descript.Workdir,
}

if c.descript.UGI != nil {
process.User = c.descript.UGI.User
process.Group = c.descript.UGI.Group
process.AdditionalGroup = c.descript.UGI.AdditionalGroups
}

err := p.sandbox.AddProcess(process, tty)

<-wReader.wait
return err
return nil
}

func (p *XPod) GetExecExitCode(containerId, execId string) (uint8, error) {
@@ -208,8 +207,8 @@ func (p *XPod) KillExec(execId string, sig int64) error {
}

return p.protectedSandboxOperation(
func(sb *hypervisor.Vm) error {
return sb.SignalProcess(es.Container, es.Id, syscall.Signal(sig))
func(sb vc.VCSandbox) error {
return sb.SignalProcess(es.Container, es.Id, syscall.Signal(sig), true)
},
time.Second*5,
fmt.Sprintf("Kill process %s with %d", es.Id, sig))
@@ -228,16 +227,20 @@ func (p *XPod) CleanupExecs() {
}

func (p *XPod) ExecVM(cmd string, stdin io.ReadCloser, stdout, stderr io.WriteCloser) (int, error) {
wReader := &waitClose{ReadCloser: stdin, wait: make(chan bool)}
tty := &hypervisor.TtyIO{
Stdin: wReader,
Stdout: stdout,
Stderr: stderr,
}
res, err := p.sandbox.HyperstartExec(cmd, tty)
if err != nil {
return res, err
}
<-wReader.wait
return res, err
/*
wReader := &waitClose{ReadCloser: stdin, wait: make(chan bool)}
tty := &hypervisor.TtyIO{
Stdin: wReader,
Stdout: stdout,
Stderr: stderr,
}
res, err := p.sandbox.HyperstartExec(cmd, tty)
if err != nil {
return res, err
}
<-wReader.wait
*/
// return res, err
return 0, nil
}
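
Condensing the hunks above, the new exec path is one consistent virtcontainers sequence. A sketch with error handling elided, using the names that appear in the diff; this is a summary, not the PR's verbatim code.

```go
cmd := vc.Cmd{
	Args:         es.Cmds,
	Envs:         c.cmdEnvs([]vc.EnvVar{}),
	WorkDir:      c.spec.Workdir,
	Interactive:  es.Terminal,
	Detach:       !es.Terminal,
	User:         "0", // default user and group, as set in the diff
	PrimaryGroup: "0",
}
_, process, err := p.sandbox.EnterContainer(containerId, cmd) // spawn the exec process
es.Id = process.Token                                         // kata identifies it by token
cstdin, cstdout, cstderr, err := p.sandbox.IOStream(containerId, es.Id)
go streamCopy(tty, cstdin, cstdout, cstderr)                  // pump IO (see streams.go below)
ret, err := p.sandbox.WaitProcess(containerId, es.Id)         // blocks until the process exits
es.ExitCode = uint8(ret)
```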
13 changes: 8 additions & 5 deletions daemon/pod/networks.go
@@ -76,11 +76,14 @@ func (inf *Interface) add() error {
inf.Log(ERROR, err)
return err
}
err := inf.p.sandbox.AddNic(inf.descript)
if err != nil {
inf.Log(ERROR, "failed to add NIC: %v", err)
}
return err
/*
err := inf.p.sandbox.AddNic(inf.descript)
if err != nil {
inf.Log(ERROR, "failed to add NIC: %v", err)
}
return err
*/
return nil
}

func (inf *Interface) cleanup() error {
30 changes: 18 additions & 12 deletions daemon/pod/persist.go
@@ -108,12 +108,6 @@ func LoadXPod(factory *PodFactory, layout *types.PersistPodLayout) (*XPod, error
}
}

for _, cid := range layout.Containers {
if err = p.loadContainer(cid); err != nil {
return nil, err
}
}

err = p.loadSandbox()
if err != nil {
if !strings.Contains(err.Error(), "leveldb: not found") {
@@ -123,6 +117,12 @@ func LoadXPod(factory *PodFactory, layout *types.PersistPodLayout) (*XPod, error
p.status = S_POD_STOPPED
}

for _, cid := range layout.Containers {
if err = p.loadContainer(cid); err != nil {
return nil, err
}
}

// if sandbox is running, set all volume INSERTED
if p.status == S_POD_RUNNING {
for _, v := range p.volumes {
@@ -356,10 +356,10 @@ func (p *XPod) removePortMappingFromDB() error {

func (c *Container) saveContainer() error {
cx := &types.PersistContainer{
Id: c.Id(),
Pod: c.p.Id(),
Spec: c.spec,
Descript: c.descript,
Id: c.Id(),
Pod: c.p.Id(),
Spec: c.spec,
// ContConfig: c.contConfig,
}
return saveMessage(c.p.factory.db, fmt.Sprintf(CX_KEY_FMT, c.Id()), cx, c, "container info")
}
@@ -448,6 +448,7 @@ func (inf *Interface) removeFromDB() error {
}

func (p *XPod) saveSandbox() error {

var (
sb types.SandboxPersistInfo
err error
@@ -461,14 +462,19 @@ func (p *XPod) saveSandbox() error {
p.statusLock.RLock()
defer p.statusLock.RUnlock()
if !stop_status[p.status] {
sb.Id = p.sandbox.Id
sb.PersistInfo, err = p.sandbox.Dump()
sb.Id = p.sandbox.ID()
/*The sandbox info is now managed by kata, so there is no need
*to keep it here.
*/
sb.PersistInfo = nil
if err != nil {
hlog.HLog(ERROR, p, 2, "failed to dump sandbox %s: %v", sb.Id, err)
return err
}
return saveMessage(p.factory.db, fmt.Sprintf(SB_KEY_FMT, p.Id()), &sb, p, "sandbox info")

}

return nil
}

61 changes: 29 additions & 32 deletions daemon/pod/pod.go
@@ -8,12 +8,10 @@ import (
"time"

"github.com/docker/docker/daemon/logger"

"github.com/hyperhq/hypercontainer-utils/hlog"
apitypes "github.com/hyperhq/hyperd/types"
"github.com/hyperhq/hyperd/utils"
"github.com/hyperhq/runv/hypervisor"
runvtypes "github.com/hyperhq/runv/hypervisor/types"
vc "github.com/kata-containers/runtime/virtcontainers"
)

const (
@@ -65,19 +63,15 @@ type XPod struct {

prestartExecs [][]string

sandbox *hypervisor.Vm
sandbox vc.VCSandbox
factory *PodFactory

info *apitypes.PodInfo
status PodState
execs map[string]*Exec
statusLock *sync.RWMutex
// stoppedChan: When the sandbox is down and the pod is stopped, a bool will be put into this channel,
// if you want to do some op after the pod is clean stopped, just wait for this channel. And if an op
// got a value from this chan, it should put an element to it again, in case other procedure may wait
// on it too.
stoppedChan chan bool
initCond *sync.Cond

initCond *sync.Cond

//Protected by statusLock
snapVolumes map[string]*apitypes.PodVolume
@@ -108,7 +102,7 @@ func (p *XPod) Name() string {
func (p *XPod) SandboxNameLocked() string {
var sbn = ""
if p.sandbox != nil {
sbn = p.sandbox.Id
sbn = p.sandbox.ID()
}
return sbn
}
@@ -301,16 +295,17 @@ func (p *XPod) ContainerInfo(cid string) (*apitypes.ContainerInfo, error) {

}

func (p *XPod) Stats() *runvtypes.PodStats {
func (p *XPod) Stats() *vc.SandboxStatus {
//use channel, don't block in resourceLock
ch := make(chan *runvtypes.PodStats, 1)

ch := make(chan *vc.SandboxStatus, 1)
var status vc.SandboxStatus
p.resourceLock.Lock()
if p.sandbox == nil {
ch <- nil
} else {
go func(sb *hypervisor.Vm) {
ch <- sb.Stats()
go func(sb vc.VCSandbox) {
status = sb.Status()
ch <- &status
}(p.sandbox)
}
p.resourceLock.Unlock()
@@ -336,7 +331,7 @@ func (p *XPod) initPodInfo() {
},
}
if p.sandbox != nil {
info.Vm = p.sandbox.Id
info.Vm = p.sandbox.ID()
}

p.info = info
@@ -388,10 +383,11 @@ func (p *XPod) updatePodInfo() error {
case S_POD_ERROR:
p.info.Status.Phase = "Failed"
}
if p.status == S_POD_RUNNING && p.sandbox != nil && len(p.info.Status.PodIP) == 0 {
p.info.Status.PodIP = p.sandbox.GetIPAddrs()
}

/*
if p.status == S_POD_RUNNING && p.sandbox != nil && len(p.info.Status.PodIP) == 0 {
p.info.Status.PodIP = p.sandbox.GetIPAddrs()
}
*/
return nil
}

@@ -503,7 +499,14 @@ func (p *XPod) TtyResize(cid, execId string, h, w int) error {
p.Log(ERROR, err)
return err
}
return p.sandbox.Tty(cid, execId, h, w)

//if no execId is specified, the request resizes the container's own
//tty; for Kata containers the init process id equals the container id.
if execId == "" {
execId = cid
}

return p.sandbox.WinsizeProcess(cid, execId, uint32(h), uint32(w))
}

func (p *XPod) WaitContainer(cid string, second int) (int, error) {
@@ -522,19 +525,13 @@ func (p *XPod) WaitContainer(cid string, second int) (int, error) {
p.Log(DEBUG, "container is already stopped")
return 0, nil
}
ch := p.sandbox.WaitProcess(true, []string{cid}, second)
if ch == nil {
ret, err := p.sandbox.WaitProcess(cid, cid)
if err != nil {
c.Log(WARNING, "cannot wait container, possibly already down")
return -1, nil
}
r, ok := <-ch
if !ok {
err := fmt.Errorf("break")
c.Log(ERROR, "chan broken while waiting container")
return -1, err
}
c.Log(INFO, "container stopped: %v", r.Code)
return r.Code, nil
c.Log(INFO, "container stopped: %v", ret)
return int(ret), nil
}

func (p *XPod) RenameContainer(cid, name string) error {
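
One behavioral shift worth noting: kata's WaitProcess(cid, pid) blocks until the process exits (and the init process id equals the container id), whereas the old runv API handed back a channel. Callers that need a timeout therefore wrap the call themselves, as stopContainers does earlier in this diff. A sketch of that pattern; the timeout value and buffered channel are illustrative.

```go
retch := make(chan int32, 1)
go func() {
	ret, _ := p.sandbox.WaitProcess(cid, cid) // init process id == container id
	retch <- ret
}()
select {
case ret := <-retch:
	p.Log(DEBUG, "container %s stopped (%d)", cid, ret)
case <-time.After(30 * time.Second): // illustrative timeout
	p.Log(WARNING, "timeout waiting for container %s to stop", cid)
}
```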
207 changes: 105 additions & 102 deletions daemon/pod/portmappings.go
@@ -1,7 +1,6 @@
package pod

import (
"fmt"
"github.com/hyperhq/hypercontainer-utils/hlog"
"github.com/hyperhq/hyperd/networking/portmapping"
apitypes "github.com/hyperhq/hyperd/types"
@@ -66,134 +65,138 @@ func (p *XPod) flushPortMapping() error {
}

func (p *XPod) AddPortMapping(spec []*apitypes.PortMapping) error {
if !p.IsAlive() {
err := fmt.Errorf("portmapping could apply to running pod only (%v)", spec)
p.Log(ERROR, "port mapping failed: %v", err)
return err
}
p.resourceLock.Lock()
defer p.resourceLock.Unlock()
/* if !p.IsAlive() {
err := fmt.Errorf("portmapping could apply to running pod only (%v)", spec)
p.Log(ERROR, "port mapping failed: %v", err)
return err
}
p.resourceLock.Lock()
defer p.resourceLock.Unlock()
if p.containerIP == "" || len(spec) == 0 {
p.Log(INFO, "Skip port maping setup [%v], container IP: %s", spec, p.containerIP)
return nil
}
if p.containerIP == "" || len(spec) == 0 {
p.Log(INFO, "Skip port maping setup [%v], container IP: %s", spec, p.containerIP)
return nil
}
pms, err := translatePortMapping(spec)
if err != nil {
p.Log(ERROR, "failed to generate port mapping rules: %v", err)
return err
}
var extPrefix []string
if p.globalSpec.PortmappingWhiteLists != nil &&
len(p.globalSpec.PortmappingWhiteLists.InternalNetworks) > 0 &&
len(p.globalSpec.PortmappingWhiteLists.ExternalNetworks) > 0 {
extPrefix = p.globalSpec.PortmappingWhiteLists.ExternalNetworks
}
preExec, err := portmapping.SetupPortMaps(p.containerIP, extPrefix, pms)
if err != nil {
p.Log(ERROR, "failed to apply port mapping rules: %v", err)
return err
}
if len(preExec) > 0 {
p.prestartExecs = append(p.prestartExecs, preExec...)
if p.sandbox != nil {
for _, ex := range preExec {
_, stderr, err := p.sandbox.HyperstartExecSync(ex, nil)
if err != nil {
p.Log(ERROR, "failed to setup inSandbox mapping: %v [ %s", err, string(stderr))
return err
pms, err := translatePortMapping(spec)
if err != nil {
p.Log(ERROR, "failed to generate port mapping rules: %v", err)
return err
}
var extPrefix []string
if p.globalSpec.PortmappingWhiteLists != nil &&
len(p.globalSpec.PortmappingWhiteLists.InternalNetworks) > 0 &&
len(p.globalSpec.PortmappingWhiteLists.ExternalNetworks) > 0 {
extPrefix = p.globalSpec.PortmappingWhiteLists.ExternalNetworks
}
preExec, err := portmapping.SetupPortMaps(p.containerIP, extPrefix, pms)
if err != nil {
p.Log(ERROR, "failed to apply port mapping rules: %v", err)
return err
}
if len(preExec) > 0 {
p.prestartExecs = append(p.prestartExecs, preExec...)
if p.sandbox != nil {
for _, ex := range preExec {
_, stderr, err := p.sandbox.HyperstartExecSync(ex, nil)
if err != nil {
p.Log(ERROR, "failed to setup inSandbox mapping: %v [ %s", err, string(stderr))
return err
}
}
}
}
}

all := make([]*apitypes.PortMapping, len(p.portMappings)+len(spec))
copy(all, spec)
copy(all[len(spec):], p.portMappings)
p.portMappings = all
err = p.savePortMapping()
if err != nil {
p.Log(WARNING, "failed to persist new portmapping rules")
// ignore the error
err = nil
}
all := make([]*apitypes.PortMapping, len(p.portMappings)+len(spec))
copy(all, spec)
copy(all[len(spec):], p.portMappings)
p.portMappings = all
err = p.savePortMapping()
if err != nil {
p.Log(WARNING, "failed to persist new portmapping rules")
// ignore the error
err = nil
}
*/
return nil
}

type portMappingCompare func(pm1, pm2 *apitypes.PortMapping) bool

func (p *XPod) removePortMapping(tbr []*apitypes.PortMapping, eq portMappingCompare) error {
p.resourceLock.Lock()
defer p.resourceLock.Unlock()
/* p.resourceLock.Lock()
defer p.resourceLock.Unlock()
if p.containerIP == "" || len(p.portMappings) == 0 || len(tbr) == 0 {
return nil
}
if p.containerIP == "" || len(p.portMappings) == 0 || len(tbr) == 0 {
return nil
}
rm := make([]*apitypes.PortMapping, 0, len(p.portMappings))
other := make([]*apitypes.PortMapping, 0, len(p.portMappings))
rm := make([]*apitypes.PortMapping, 0, len(p.portMappings))
other := make([]*apitypes.PortMapping, 0, len(p.portMappings))
for _, pm := range p.portMappings {
selected := false
for _, sel := range tbr {
if eq(pm, sel) {
rm = append(rm, pm)
selected = true
break
for _, pm := range p.portMappings {
selected := false
for _, sel := range tbr {
if eq(pm, sel) {
rm = append(rm, pm)
selected = true
break
}
}
if !selected {
other = append(other, pm)
}
}
if !selected {
other = append(other, pm)
if len(rm) == 0 {
p.Log(DEBUG, "no portmapping to be removed by %v", tbr)
return nil
}
}
if len(rm) == 0 {
p.Log(DEBUG, "no portmapping to be removed by %v", tbr)
return nil
}
act, err := translatePortMapping(rm)
if err != nil {
p.Log(ERROR, "failed to generate removing rules: %v", err)
return err
}
act, err := translatePortMapping(rm)
if err != nil {
p.Log(ERROR, "failed to generate removing rules: %v", err)
return err
}
var extPrefix []string
if p.globalSpec.PortmappingWhiteLists != nil &&
len(p.globalSpec.PortmappingWhiteLists.InternalNetworks) > 0 &&
len(p.globalSpec.PortmappingWhiteLists.ExternalNetworks) > 0 {
extPrefix = p.globalSpec.PortmappingWhiteLists.ExternalNetworks
}
postExec, err := portmapping.ReleasePortMaps(p.containerIP, extPrefix, act)
if err != nil {
p.Log(ERROR, "failed to clean up rules: %v", err)
return err
}
var extPrefix []string
if p.globalSpec.PortmappingWhiteLists != nil &&
len(p.globalSpec.PortmappingWhiteLists.InternalNetworks) > 0 &&
len(p.globalSpec.PortmappingWhiteLists.ExternalNetworks) > 0 {
extPrefix = p.globalSpec.PortmappingWhiteLists.ExternalNetworks
}
postExec, err := portmapping.ReleasePortMaps(p.containerIP, extPrefix, act)
if err != nil {
p.Log(ERROR, "failed to clean up rules: %v", err)
return err
}
if len(postExec) > 0 {
// don't need to release prestartExec here, it is not persistent
if p.sandbox != nil {
for _, ex := range postExec {
_, stderr, err := p.sandbox.HyperstartExecSync(ex, nil)
if err != nil {
p.Log(ERROR, "failed to setup inSandbox mapping: %v [ %s", err, string(stderr))
return err
if len(postExec) > 0 {
// don't need to release prestartExec here, it is not persistent
if p.sandbox != nil {
for _, ex := range postExec {
_, stderr, err := p.sandbox.HyperstartExecSync(ex, nil)
if err != nil {
p.Log(ERROR, "failed to setup inSandbox mapping: %v [ %s", err, string(stderr))
return err
}
}
}
}
}
p.portMappings = other
err = p.savePortMapping()
if err != nil {
p.Log(WARNING, "failed to persist removed portmapping rules")
// ignore the error
err = nil
}
p.portMappings = other
err = p.savePortMapping()
if err != nil {
p.Log(WARNING, "failed to persist removed portmapping rules")
// ignore the error
err = nil
}
return err
return err
*/
return nil
}

func (p *XPod) RemovePortMappingByDest(spec []*apitypes.PortMapping) error {
118 changes: 60 additions & 58 deletions daemon/pod/provision.go
@@ -11,8 +11,8 @@ import (
"github.com/hyperhq/hyperd/errors"
apitypes "github.com/hyperhq/hyperd/types"
"github.com/hyperhq/hyperd/utils"
runv "github.com/hyperhq/runv/api"
"github.com/hyperhq/runv/hypervisor"
vc "github.com/kata-containers/runtime/virtcontainers"
)

var (
@@ -34,17 +34,22 @@ func CreateXPod(factory *PodFactory, spec *apitypes.UserPod) (*XPod, error) {
p.releaseNames(spec.Containers)
}
}()
err = p.createSandbox(spec) //TODO: add defer for rollback
if err != nil {
return nil, err
}

defer func() {
if err != nil && p.sandbox != nil {
p.sandbox.Kill()
status := p.sandbox.Status()
if status.State.State == vc.StateRunning {
vc.StopSandbox(p.sandbox.ID())
}
p.sandbox.Delete()
}
}()

err = p.createSandbox(spec)
if err != nil {
return nil, err
}

err = p.initResources(spec, true)
if err != nil {
return nil, err
@@ -99,7 +104,6 @@ func newXPod(factory *PodFactory, spec *apitypes.UserPod) (*XPod, error) {
execs: make(map[string]*Exec),
resourceLock: &sync.Mutex{},
statusLock: &sync.RWMutex{},
stoppedChan: make(chan bool, 1),
factory: factory,
snapVolumes: make(map[string]*apitypes.PodVolume),
snapContainers: make(map[string]*Container),
@@ -223,28 +227,39 @@ func (p *XPod) ContainerStart(cid string) error {

// Start() means start a STOPPED pod.
func (p *XPod) Start() error {
var err error

defer func() {
if err != nil && p.sandbox != nil {
status := p.sandbox.Status()
if status.State.State == vc.StateRunning {
vc.StopSandbox(p.sandbox.ID())
vc.DeleteSandbox(p.sandbox.ID())
}
}
}()

if p.IsStopped() {
if err := p.createSandbox(p.globalSpec); err != nil {
if err = p.createSandbox(p.globalSpec); err != nil {
p.Log(ERROR, "failed to create sandbox for the stopped pod: %v", err)
return err
}

if err := p.prepareResources(); err != nil {
if err = p.prepareResources(); err != nil {
return err
}

if err := p.addResourcesToSandbox(); err != nil {
if err = p.addResourcesToSandbox(); err != nil {
return err
}
}

err := p.waitPodRun("start pod")
err = p.waitPodRun("start pod")
if err != nil {
p.Log(ERROR, "wait running failed, cannot start pod")
return err
}
if err := p.startAll(); err != nil {
if err = p.startAll(); err != nil {
return err
}

@@ -253,7 +268,7 @@ func (p *XPod) Start() error {

func (p *XPod) createSandbox(spec *apitypes.UserPod) error {
//in the future, here
sandbox, err := startSandbox(p.factory.vmFactory, int(spec.Resource.Vcpu), int(spec.Resource.Memory), "", "")
sandbox, err := startSandbox(spec, "", "")
if err != nil {
p.Log(ERROR, err)
return err
@@ -263,51 +278,36 @@ func (p *XPod) createSandbox(spec *apitypes.UserPod) error {
return errors.ErrSandboxNotExist
}

config := &runv.SandboxConfig{
Hostname: spec.Hostname,
Dns: spec.Dns,
DnsOptions: spec.DnsOptions,
DnsSearch: spec.DnsSearch,
Neighbors: &runv.NeighborNetworks{
InternalNetworks: spec.PortmappingWhiteLists.InternalNetworks,
ExternalNetworks: spec.PortmappingWhiteLists.ExternalNetworks,
},
}

p.sandbox = sandbox
p.status = S_POD_STARTING

go p.waitVMStop()
err = sandbox.InitSandbox(config)
if err != nil {
go sandbox.Shutdown()
}
p.Log(INFO, "sandbox init result: %#v", err)
p.setPodInitStatus(err == nil)
return err
}

func (p *XPod) reconnectSandbox(sandboxId string, pinfo []byte) error {
var (
sandbox *hypervisor.Vm
err error
vcsandbox vc.VCSandbox
err error
)

if sandboxId != "" {
sandbox, err = hypervisor.AssociateVm(sandboxId, pinfo)
vcsandbox, err = vc.FetchSandbox(sandboxId)
if err != nil {
p.Log(ERROR, err)
sandbox = nil
vcsandbox = nil
}
}

if sandbox == nil {
if vcsandbox == nil {
p.status = S_POD_STOPPED
return err
}

p.status = S_POD_RUNNING
p.sandbox = sandbox
p.sandbox = vcsandbox
go p.waitVMStop()
return nil
}
@@ -464,23 +464,25 @@ func (p *XPod) addResourcesToSandbox() error {
p.Log(INFO, "adding resource to sandbox")
future := utils.NewFutureSet()

future.Add("addInterface", func() error {
for _, inf := range p.interfaces {
if err := inf.add(); err != nil {
return err
/*
future.Add("addInterface", func() error {
for _, inf := range p.interfaces {
if err := inf.add(); err != nil {
return err
}
}
}
err := p.sandbox.AddRoute()
if err != nil {
p.Log(ERROR, "fail to add Route: %v", err)
}
return err
})
err := p.sandbox.AddRoute()
if err != nil {
p.Log(ERROR, "fail to add Route: %v", err)
}
return err
})
for iv, vol := range p.volumes {
future.Add(iv, vol.add)
}
for iv, vol := range p.volumes {
future.Add(iv, vol.add)
}
*/
for ic, c := range p.containers {
future.Add(ic, c.addToSandbox)
}
@@ -499,16 +501,16 @@ func (p *XPod) addResourcesToSandbox() error {
func (p *XPod) startAll() error {
p.Log(INFO, "start all containers")
future := utils.NewFutureSet()

for _, pre := range p.prestartExecs {
p.Log(DEBUG, "run prestart exec %v", pre)
_, stderr, err := p.sandbox.HyperstartExecSync(pre, nil)
if err != nil {
p.Log(ERROR, "failed to execute prestart command %v: %v [ %s", pre, err, string(stderr))
return err
/*
for _, pre := range p.prestartExecs {
p.Log(DEBUG, "run prestart exec %v", pre)
_, stderr, err := p.sandbox.HyperstartExecSync(pre, nil)
if err != nil {
p.Log(ERROR, "failed to execute prestart command %v: %v [ %s", pre, err, string(stderr))
return err
}
}
}

*/
for ic, c := range p.containers {
future.Add(ic, c.start)
}
@@ -525,7 +527,7 @@ func (p *XPod) sandboxShareDir() string {
// the /dev/null is not a dir, then, can not create or open it
return "/dev/null/no-such-dir"
}
return filepath.Join(hypervisor.BaseDir, p.sandbox.Id, hypervisor.ShareDirTag)
return filepath.Join(hypervisor.BaseDir, p.sandbox.ID(), hypervisor.ShareDirTag)
}

func (p *XPod) waitPodRun(activity string) error {
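
CreateXPod and Start share the rollback idiom introduced above: a named error return plus a deferred closure that stops and deletes a partially created sandbox. A minimal sketch, assuming the vc import from this file; the function name createWithRollback is illustrative.

```go
func (p *XPod) createWithRollback(spec *apitypes.UserPod) (err error) {
	// the named return lets the deferred rollback observe the final outcome
	defer func() {
		if err != nil && p.sandbox != nil {
			if p.sandbox.Status().State.State == vc.StateRunning {
				vc.StopSandbox(p.sandbox.ID()) // best effort; sketch ignores the error
			}
			p.sandbox.Delete()
		}
	}()

	if err = p.createSandbox(spec); err != nil {
		return err
	}
	// ... prepare resources, add them to the sandbox, start containers ...
	return nil
}
```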
94 changes: 69 additions & 25 deletions daemon/pod/sandbox.go
@@ -2,57 +2,101 @@ package pod

import (
"github.com/hyperhq/hypercontainer-utils/hlog"
"github.com/hyperhq/runv/factory"
"github.com/hyperhq/runv/hypervisor"
apitypes "github.com/hyperhq/hyperd/types"
vc "github.com/kata-containers/runtime/virtcontainers"
)

const (
defaultHypervisor = vc.QemuHypervisor
defaultProxy = vc.KataBuiltInProxyType
defaultShim = vc.KataBuiltInShimType
defaultAgent = vc.KataContainersAgent

DefaultKernel = "/usr/share/kata-containers/vmlinuz.container"
DefaultInitrd = "/usr/share/kata-containers/kata-containers-initrd.img"
DefaultImage = "/usr/share/kata-containers/kata-containers.img"
DefaultHyper = "/usr/bin/qemu-lite-system-x86_64"
)

const (
maxReleaseRetry = 3
MaxVCPUs = 4
)

func startSandbox(f factory.Factory, cpu, mem int, kernel, initrd string) (vm *hypervisor.Vm, err error) {
func startSandbox(spec *apitypes.UserPod, kernel, initrd string) (sandbox vc.VCSandbox, err error) {
var (
DEFAULT_CPU = 1
DEFAULT_MEM = 128
)

if cpu <= 0 {
cpu = DEFAULT_CPU
if spec.Resource.Vcpu <= 0 {
spec.Resource.Vcpu = int32(DEFAULT_CPU)
}
if mem <= 0 {
mem = DEFAULT_MEM
if spec.Resource.Memory <= 0 {
spec.Resource.Memory = int32(DEFAULT_MEM)
}

resource := vc.Resources{
Memory: uint(spec.Resource.Memory),
}

if kernel == "" {
hlog.Log(DEBUG, "get sandbox from factory: CPU: %d, Memory %d", cpu, mem)
vm, err = f.GetVm(cpu, mem)
} else {
hlog.Log(DEBUG, "The create sandbox with: kernel=%s, initrd=%s, cpu=%d, memory=%d", kernel, initrd, cpu, mem)
config := &hypervisor.BootConfig{
CPU: cpu,
Memory: mem,
Kernel: kernel,
Initrd: initrd,
}
vm, err = hypervisor.GetVm("", config, false)
kernel = DefaultKernel
}
if initrd == "" {
initrd = DefaultInitrd
}

params := []vc.Param{{Key: "agent.log", Value: "debug"}}

sandboxConfig := vc.SandboxConfig{
ID: spec.Id,
Hostname: spec.Hostname,
VMConfig: resource,

HypervisorType: defaultHypervisor,
HypervisorConfig: vc.HypervisorConfig{
HypervisorPath: DefaultHyper,
KernelParams: params,
KernelPath: kernel,
InitrdPath: initrd,
DefaultMaxVCPUs: MaxVCPUs,
},

AgentType: defaultAgent,
AgentConfig: vc.KataAgentConfig{LongLiveConn: true},

ProxyType: defaultProxy,
ProxyConfig: vc.ProxyConfig{},

ShimType: defaultShim,
ShimConfig: vc.ShimConfig{},

//there is a bug in kata-agent, thus set it false temporarily
SharePidNs: false,

// NetworkModel: vc.CNMNetworkModel,
// NetworkConfig: vc.NetworkConfig{},
}
vcsandbox, err := vc.RunSandbox(sandboxConfig)
if err != nil {
hlog.Log(ERROR, "failed to create a sandbox (cpu=%d, mem=%d kernel=%s initrd=%d): %v", cpu, mem, kernel, initrd, err)
hlog.Log(ERROR, "failed to create a sandbox")
return nil, err
}

return vm, err
return vcsandbox, err
}

func dissociateSandbox(sandbox *hypervisor.Vm, retry int) error {
func dissociateSandbox(sandbox vc.VCSandbox, retry int) error {
if sandbox == nil {
return nil
}

err := sandbox.ReleaseVm()
err := sandbox.Release()
if err != nil {
hlog.Log(WARNING, "SB[%s] failed to release sandbox: %v", sandbox.Id, err)
hlog.Log(INFO, "SB[%s] shutdown because of failed release", sandbox.Id)
sandbox.Kill()
hlog.Log(WARNING, "SB[%s] failed to release sandbox: %v", sandbox.ID(), err)
hlog.Log(INFO, "SB[%s] shutdown because of failed release", sandbox.ID())
_, err = vc.StopSandbox(sandbox.ID())
return err
}
return nil
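
sandbox.go now funnels hyperd through a small set of virtcontainers entry points. A hedged summary of the sandbox lifecycle as these calls appear across the diffs, with error handling elided:

```go
sandbox, err := vc.RunSandbox(sandboxConfig) // create and boot the VM (startSandbox)
sandbox, err = vc.FetchSandbox(sandboxID)    // re-attach after a daemon restart (reconnectSandbox)
err = sandbox.Release()                      // detach without stopping the VM (dissociateSandbox)
_, err = vc.StopSandbox(sandboxID)           // shut the VM down (ForceQuit / doStopPod)
err = sandbox.Delete()                       // remove on-disk sandbox state (decommissionResources)
```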
82 changes: 43 additions & 39 deletions daemon/pod/servicediscovery.go
@@ -199,60 +199,64 @@ func (s *Services) commit(srvs []*apitypes.UserService, operation string) error

func (s *Services) commitToVm(patch []byte) error {
s.Log(TRACE, "commit IPVS service patch: \n%s", string(patch))

saveData, err := s.getFromVm()
if err != nil {
return err
}

clear := func() error {
cmd := []string{"ipvsadm", "-C"}
_, stderr, err := s.p.sandbox.HyperstartExecSync(cmd, nil)
/*
saveData, err := s.getFromVm()
if err != nil {
s.Log(ERROR, "clear ipvs rules failed: %v, %s", err, stderr)
return err
}
return nil
}
clear := func() error {
cmd := []string{"ipvsadm", "-C"}
_, stderr, err := s.p.sandbox.HyperstartExecSync(cmd, nil)
if err != nil {
s.Log(ERROR, "clear ipvs rules failed: %v, %s", err, stderr)
return err
}
apply := func(rules []byte) error {
cmd := []string{"ipvsadm", "-R"}
_, stderr, err := s.p.sandbox.HyperstartExecSync(cmd, rules)
if err != nil {
s.Log(ERROR, "apply ipvs rules failed: %v, %s", err, stderr)
return err
return nil
}
return nil
}
apply := func(rules []byte) error {
cmd := []string{"ipvsadm", "-R"}
_, stderr, err := s.p.sandbox.HyperstartExecSync(cmd, rules)
if err != nil {
s.Log(ERROR, "apply ipvs rules failed: %v, %s", err, stderr)
return err
}
if err = apply(patch); err != nil {
// restore original ipvs services
err1 := clear()
if err1 != nil {
s.Log(ERROR, "restore original ipvs services failed in clear stage: %v", err1)
return err
}
err1 = apply(saveData)
if err1 != nil {
s.Log(ERROR, "restore original ipvs services failed in apply stage: %v", err1)
return nil
}
return err
}
if err = apply(patch); err != nil {
// restore original ipvs services
err1 := clear()
if err1 != nil {
s.Log(ERROR, "restore original ipvs services failed in clear stage: %v", err1)
return err
}
err1 = apply(saveData)
if err1 != nil {
s.Log(ERROR, "restore original ipvs services failed in apply stage: %v", err1)
}
return err
}
*/
return nil
}

func (s *Services) getFromVm() ([]byte, error) {
cmd := []string{"ipvsadm", "-Ln"}
stdout, stderr, err := s.p.sandbox.HyperstartExecSync(cmd, nil)
if err != nil {
s.Log(ERROR, "get ipvs service from vm failed: %v, %s", err, stderr)
return nil, err
}
/* cmd := []string{"ipvsadm", "-Ln"}
stdout, stderr, err := s.p.sandbox.HyperstartExecSync(cmd, nil)
if err != nil {
s.Log(ERROR, "get ipvs service from vm failed: %v, %s", err, stderr)
return nil, err
}
return stdout, nil
*/
return nil, nil

return stdout, nil
}

func (s *Services) size() int {
52 changes: 52 additions & 0 deletions daemon/pod/streams.go
@@ -5,6 +5,7 @@ import (
"io"
"io/ioutil"
"strings"
"sync"

"github.com/docker/docker/pkg/broadcaster"
"github.com/docker/docker/pkg/ioutils"
@@ -34,6 +35,30 @@ func (sc *StreamCloser) Close() error {
return sc.Clean()
}

type TtyIO struct {
Stdin io.ReadCloser
Stdout io.Writer
Stderr io.Writer
}

func (tty *TtyIO) Close() {
// hlog.Log(TRACE, "Close tty")

if tty.Stdin != nil {
tty.Stdin.Close()
}
cf := func(w io.Writer) {
if w == nil {
return
}
if c, ok := w.(io.WriteCloser); ok {
c.Close()
}
}
cf(tty.Stdout)
cf(tty.Stderr)
}

// NewStreamConfig creates a stream config and initializes
// the standard err and standard out to new unbuffered broadcasters.
func NewStreamConfig() *StreamConfig {
@@ -113,3 +138,30 @@ func (streamConfig *StreamConfig) CloseStreams() error {

return nil
}

func streamCopy(tty *TtyIO, stdinPipe io.WriteCloser, stdoutPipe, stderrPipe io.Reader) {
var wg sync.WaitGroup

if tty.Stdin != nil {
go func() {
_, _ = io.Copy(stdinPipe, tty.Stdin)
stdinPipe.Close()
}()
}
if tty.Stdout != nil {
wg.Add(1)
go func() {
_, _ = io.Copy(tty.Stdout, stdoutPipe)
wg.Done()
}()
}
if tty.Stderr != nil && stderrPipe != nil {
wg.Add(1)
go func() {
_, _ = io.Copy(tty.Stderr, stderrPipe)
wg.Done()
}()
}
wg.Wait()
tty.Close()
}
52 changes: 28 additions & 24 deletions daemon/pod/volume.go
@@ -105,13 +105,14 @@ func (v *Volume) add() error {
// the class.
func (v *Volume) insert() error {
v.Log(DEBUG, "insert volume to sandbox")
r := v.p.sandbox.AddVolume(v.descript)
if !r.IsSuccess() {
err := fmt.Errorf("failed to insert: %s", r.Message())
v.Log(ERROR, err)
return err
}

/*
r := v.p.sandbox.AddVolume(v.descript)
if !r.IsSuccess() {
err := fmt.Errorf("failed to insert: %s", r.Message())
v.Log(ERROR, err)
return err
}
*/
v.Log(INFO, "volume inserted")
return nil
}
@@ -131,24 +132,27 @@ func (v *Volume) removeFromSandbox() error {
}

func (v *Volume) tryRemoveFromSandbox() (bool, error) {
var (
removed bool
err error
)
r := v.p.sandbox.RemoveVolume(v.spec.Name)
removed = r.IsSuccess()
if !removed && (r.Message() != "in use") {
err = fmt.Errorf("failed to remove vol from sandbox: %s", r.Message())
v.Log(ERROR, err)
}
/* var (
removed bool
err error
)
r := v.p.sandbox.RemoveVolume(v.spec.Name)
removed = r.IsSuccess()
if !removed && (r.Message() != "in use") {
err = fmt.Errorf("failed to remove vol from sandbox: %s", r.Message())
v.Log(ERROR, err)
}
if removed {
v.Lock()
v.status = S_VOLUME_CREATED
v.Unlock()
}
v.Log(INFO, "volume remove from sandbox (removed: %v)", removed)
return removed, err
if removed {
v.Lock()
v.status = S_VOLUME_CREATED
v.Unlock()
}
v.Log(INFO, "volume remove from sandbox (removed: %v)", removed)
return removed, err
*/
return true, nil
}

// mount() should only called by add(), and not expose to outside
2 changes: 1 addition & 1 deletion hack/lib/test.sh
@@ -174,7 +174,7 @@ hyper::test::remove_container_with_volume() {
hyper::test::imageuser() {
echo "Pod image user config test"
# irssi image has "User": "user"
id=$(sudo hyperctl run -d --env="TERM=xterm" irssi:1 | sed -ne "s/POD id is \(.*\)/\1/p")
id=$(sudo hyperctl run -d --env="TERM=xterm" irssi:1.0 | sed -ne "s/POD id is \(.*\)/\1/p")
res=$(sudo hyperctl exec $id ps aux | grep user > /dev/null 2>&1; echo $?)
sudo hyperctl rm $id
test $res -eq 0
2 changes: 1 addition & 1 deletion hack/pods/user-override.pod
@@ -1,6 +1,6 @@
{
"containers" : [{
"image": "irssi:1",
"image": "irssi:1.0",
"user": {
"name": "nobody"
},
6 changes: 3 additions & 3 deletions hack/test-cmd.sh
@@ -189,8 +189,8 @@ __EOF__
hyper::test::map_file
hyper::test::with_volume
hyper::test::service
hyper::test::nfs_volume
hyper::test::execvm
# hyper::test::nfs_volume
# hyper::test::execvm
hyper::test::remove_container_with_volume
hyper::test::imageuser
hyper::test::imageusergroup
@@ -199,7 +199,7 @@ __EOF__
hyper::test::force_kill_container
hyper::test::container_logs_no_newline
hyper::test::container_readonly_rootfs_and_volume
hyper::test::portmapping
# hyper::test::portmapping

stop_hyperd
}
2 changes: 1 addition & 1 deletion image/tarexport/load.go
@@ -8,7 +8,6 @@ import (
"os"
"path/filepath"

"github.com/Sirupsen/logrus"
"github.com/docker/docker/image"
"github.com/docker/docker/image/v1"
"github.com/docker/docker/layer"
@@ -18,6 +17,7 @@ import (
"github.com/docker/docker/reference"
digest "github.com/opencontainers/go-digest"
ociv1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
)

func (l *tarexporter) Load(inTar io.ReadCloser, name string, refs map[string]string, outStream io.Writer) error {
5 changes: 4 additions & 1 deletion integration/hyper_test.go
@@ -590,7 +590,10 @@ func (s *TestSuite) TestSendExecSignal(c *C) {

exitCode, err := s.client.Wait(cName, execId, false)
c.Assert(err, IsNil)
c.Assert(exitCode, Equals, int32(0))
//in kata, the exit code is the process's actual exit status: a
//process killed with signal 9 (SIGKILL) exits with code 137 (128+9).
c.Assert(exitCode, Equals, int32(137))
}

func (s *TestSuite) TestTTYResize(c *C) {
14 changes: 14 additions & 0 deletions scripts/install_agent.sh
@@ -0,0 +1,14 @@
#!/bin/bash
#
# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

set -e

cidir=$(dirname "$0")

source "${cidir}/lib.sh"

clone_build_and_install "github.com/kata-containers/agent"
136 changes: 136 additions & 0 deletions scripts/install_kata_image.sh
@@ -0,0 +1,136 @@
#!/bin/bash
#
# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

set -o errexit
set -o nounset
set -o pipefail

cidir=$(dirname "$0")

source /etc/os-release
source "${cidir}/lib.sh"

AGENT_INIT="yes"
TEST_INITRD="yes"

TMP_DIR=
ROOTFS_DIR=

PACKAGED_IMAGE="kata-containers-image"
IMG_PATH="/usr/share/kata-containers"
IMG_NAME="kata-containers.img"

agent_path="${GOPATH}/src/github.com/kata-containers/agent"

IMG_MOUNT_DIR=
LOOP_DEVICE=

# Build Kata agent
bash -f "${cidir}/install_agent.sh"
agent_commit=$(git --work-tree="${agent_path}" --git-dir="${agent_path}/.git" log --format=%h -1 HEAD)

cleanup() {
[ -d "${ROOTFS_DIR}" ] && [[ "${ROOTFS_DIR}" = *"rootfs"* ]] && sudo rm -rf "${ROOTFS_DIR}"
[ -d "${TMP_DIR}" ] && rm -rf "${TMP_DIR}"
if [ -n "${IMG_MOUNT_DIR}" ] && mount | grep -q "${IMG_MOUNT_DIR}"; then
sudo umount "${IMG_MOUNT_DIR}"
fi
if [ -d "${IMG_MOUNT_DIR}" ]; then
rm -rf "${IMG_MOUNT_DIR}"
fi
if [ -n "${LOOP_DEVICE}" ]; then
sudo losetup -d "${LOOP_DEVICE}"
fi
}

trap cleanup EXIT

get_packaged_agent_version() {
version=$(ls "$IMG_PATH" | grep "$PACKAGED_IMAGE" | cut -d'_' -f4 | cut -d'.' -f1)
if [ -z "$version" ]; then
die "unknown agent version"
fi
echo "$version"
}

install_packaged_image() {
if [ "$ID" == "ubuntu" ]; then
sudo -E apt install -y "$PACKAGED_IMAGE"
elif [ "$ID" == "fedora" ]; then
sudo -E dnf install -y "$PACKAGED_IMAGE"
elif [ "$ID" == "centos" ]; then
sudo -E yum install -y "$PACKAGED_IMAGE"
else
die "Linux distribution not supported"
fi
}

update_agent() {
pushd "$agent_path"

LOOP_DEVICE=$(sudo losetup -f --show "${IMG_PATH}/${IMG_NAME}")
IMG_MOUNT_DIR=$(mktemp -d -t kata-image-mount.XXXXXXXXXX)
sudo partprobe "$LOOP_DEVICE"
sudo mount "${LOOP_DEVICE}p1" "$IMG_MOUNT_DIR"

echo "Old agent version:"
"${IMG_MOUNT_DIR}/usr/bin/kata-agent" --version

echo "Install new agent"
sudo -E PATH="$PATH" bash -c "make install DESTDIR=$IMG_MOUNT_DIR"
installed_version=$("${IMG_MOUNT_DIR}/usr/bin/kata-agent" --version)
echo "New agent version: $installed_version"

popd
installed_version=${installed_version##k*-}
[[ "${installed_version}" == *"${current_version}"* ]]
}

build_image() {
TMP_DIR=$(mktemp -d -t kata-image-install.XXXXXXXXXX)
readonly ROOTFS_DIR="${TMP_DIR}/rootfs"
export ROOTFS_DIR

image_type=$(get_version "assets.image.meta.image-type")
OSBUILDER_DISTRO=${OSBUILDER_DISTRO:-$image_type}

osbuilder_repo="github.com/kata-containers/osbuilder"

# Clone os-builder repository
go get -d "${osbuilder_repo}" || true

(cd "${GOPATH}/src/${osbuilder_repo}/rootfs-builder" && \
sudo -E AGENT_INIT="${AGENT_INIT}" AGENT_VERSION="${agent_commit}" \
GOPATH="$GOPATH" USE_DOCKER=true ./rootfs.sh "alpine")

# Build the image
if [ x"${TEST_INITRD}" == x"yes" ]; then
pushd "${GOPATH}/src/${osbuilder_repo}/initrd-builder"
sudo -E AGENT_INIT="${AGENT_INIT}" USE_DOCKER=true ./initrd_builder.sh "$ROOTFS_DIR"
image_name="kata-containers-initrd.img"
else
pushd "${GOPATH}/src/${osbuilder_repo}/image-builder"
sudo -E AGENT_INIT="${AGENT_INIT}" USE_DOCKER=true ./image_builder.sh "$ROOTFS_DIR"
image_name="kata-containers.img"
fi

# Install the image
commit=$(git log --format=%h -1 HEAD)
date=$(date +%Y-%m-%d-%T.%N%z)
image="kata-containers-${date}-osbuilder-${commit}-agent-${agent_commit}"

sudo install -o root -g root -m 0640 -D ${image_name} "/usr/share/kata-containers/${image}"
(cd /usr/share/kata-containers && sudo ln -sf "$image" ${image_name})

popd
}

main() {
build_image
}

main
126 changes: 126 additions & 0 deletions scripts/install_kata_kernel.sh
@@ -0,0 +1,126 @@
#!/bin/bash
#
# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# Currently we will use this repository until this issue is solved
# See https://github.com/kata-containers/packaging/issues/1

set -e

cidir=$(dirname "$0")
source "${cidir}/lib.sh"
source "/etc/os-release"

repo_name="packaging"
repo_owner="kata-containers"
kata_kernel_dir="/usr/share/kata-containers"
kernel_arch="$(arch)"
get_kernel_url="https://cdn.kernel.org/pub/linux/kernel"
tmp_dir="$(mktemp -d)"
hypervisor="kvm"
packaged_kernel="kata-linux-container"

download_repo() {
pushd ${tmp_dir}
git clone --depth 1 https://github.com/${repo_owner}/${repo_name}
popd
}

get_current_kernel_version() {
kernel_version=$(get_version "assets.kernel.version")
echo "${kernel_version/v/}"
}

get_kata_config_version() {
pushd "${tmp_dir}/${repo_name}" >> /dev/null
kata_config_version=$(cat kernel/kata_config_version)
popd >> /dev/null
echo "${kata_config_version}"
}

get_packaged_kernel_version() {
if [ "$ID" == "ubuntu" ]; then
kernel_version=$(sudo apt-cache madison $packaged_kernel | awk '{print $3}' | cut -d'-' -f1)
elif [ "$ID" == "fedora" ]; then
kernel_version=$(sudo dnf --showduplicate list ${packaged_kernel}.${kernel_arch} | awk '/'$packaged_kernel'/ {print $2}' | cut -d'-' -f1)
elif [ "$ID" == "centos" ]; then
kernel_version=$(sudo yum --showduplicate list $packaged_kernel | awk '/'$packaged_kernel'/ {print $2}' | cut -d'-' -f1)
fi

if [ -z "$kernel_version" ]; then
die "unknown kernel version"
else
echo "${kernel_version}"
fi

}

# download the linux kernel, first argument is the kernel version
download_kernel() {
kernel_version=$1
pushd $tmp_dir
kernel_tar_file="linux-${kernel_version}.tar.xz"
kernel_url="${get_kernel_url}/v$(echo $kernel_version | cut -f1 -d.).x/${kernel_tar_file}"
curl -LOk ${kernel_url}
tar -xf ${kernel_tar_file}
popd
}

# build the linux kernel, first argument is the kernel version
build_and_install_kernel() {
kernel_version=$1
pushd ${tmp_dir}
kernel_config_file=$(realpath ${repo_name}/kernel/configs/[${kernel_arch}]*_kata_${hypervisor}_* | tail -1)
kernel_patches=$(realpath ${repo_name}/kernel/patches/*)
kernel_src_dir="linux-${kernel_version}"
pushd ${kernel_src_dir}
cp ${kernel_config_file} .config
for p in ${kernel_patches}; do patch -p1 < $p; done
make -s ARCH=${kernel_arch} oldconfig > /dev/null
if [ "${CI:-}" == "true" ]; then
make ARCH=${kernel_arch} -j$(nproc)
else
make ARCH=${kernel_arch}
fi
sudo mkdir -p ${kata_kernel_dir}
sudo cp -a "$(realpath arch/${kernel_arch}/boot/bzImage)" "${kata_kernel_dir}/vmlinuz.container"
sudo cp -a "$(realpath vmlinux)" "${kata_kernel_dir}/vmlinux.container"
popd
popd
}

install_packaged_kernel(){
if [ "$ID" == "ubuntu" ]; then
sudo apt install -y "$packaged_kernel"
elif [ "$ID" == "fedora" ]; then
sudo dnf install -y "$packaged_kernel"
elif [ "$ID" == "centos" ]; then
sudo yum install -y "$packaged_kernel"
else
die "Unrecognized distro"
fi
}

cleanup() {
rm -rf "${tmp_dir}"
}

main() {
download_repo
kernel_version="$(get_current_kernel_version)"
kata_config_version="$(get_kata_config_version)"
current_kernel_version="${kernel_version}.${kata_config_version}"
packaged_kernel_version=$(get_packaged_kernel_version)
if [ "$packaged_kernel_version" == "$current_kernel_version" ] && [ "$kernel_arch" == "x86_64" ]; then
install_packaged_kernel
else
download_kernel ${kernel_version}
build_and_install_kernel ${kernel_version}
cleanup
fi
}

main
81 changes: 81 additions & 0 deletions scripts/install_qemu.sh
@@ -0,0 +1,81 @@
#!/bin/bash
#
# Copyright (c) 2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

set -e

cidir=$(dirname "$0")
source "${cidir}/lib.sh"
source /etc/os-release

CURRENT_QEMU_COMMIT=$(get_version "assets.hypervisor.qemu-lite.commit")
PACKAGED_QEMU="qemu-lite"
QEMU_ARCH=$(arch)

get_packaged_qemu_commit() {
if [ "$ID" == "ubuntu" ]; then
qemu_commit=$(sudo apt-cache madison $PACKAGED_QEMU \
| awk '{print $3}' | cut -d'-' -f1 | cut -d'.' -f4)
elif [ "$ID" == "fedora" ]; then
qemu_commit=$(sudo dnf --showduplicate list ${PACKAGED_QEMU}.${QEMU_ARCH} \
| awk '/'$PACKAGED_QEMU'/ {print $2}' | cut -d'-' -f1 | cut -d'.' -f4)
elif [ "$ID" == "centos" ]; then
qemu_commit=$(sudo yum --showduplicate list $PACKAGED_QEMU \
| awk '/'$PACKAGED_QEMU'/ {print $2}' | cut -d'-' -f1 | cut -d'.' -f4)
fi

if [ -z "$qemu_commit" ]; then
die "unknown qemu commit"
else
echo "${qemu_commit}"
fi
}

install_packaged_qemu() {
if [ "$ID" == "ubuntu" ]; then
sudo apt install -y "$PACKAGED_QEMU"
elif [ "$ID" == "fedora" ]; then
sudo dnf install -y "$PACKAGED_QEMU"
elif [ "$ID" == "centos" ]; then
sudo yum install -y "$PACKAGED_QEMU"
else
die "Unrecognized distro"
fi
}

build_and_install_qemu() {
QEMU_REPO=$(get_version "assets.hypervisor.qemu-lite.url")
# Remove 'https://' from the repo url to be able to clone the repo using 'go get'
QEMU_REPO=${QEMU_REPO/https:\/\//}
PACKAGING_REPO="github.com/kata-containers/packaging"
QEMU_CONFIG_SCRIPT="${GOPATH}/src/${PACKAGING_REPO}/scripts/configure-hypervisor.sh"

go get -d "${QEMU_REPO}" || true
go get -d "$PACKAGING_REPO" || true

pushd "${GOPATH}/src/${QEMU_REPO}"
git fetch
git checkout "$CURRENT_QEMU_COMMIT"
[ -d "capstone" ] || git clone https://github.com/qemu/capstone.git capstone
[ -d "ui/keycodemapdb" ] || git clone https://github.com/qemu/keycodemapdb.git ui/keycodemapdb

echo "Build Qemu"
"${QEMU_CONFIG_SCRIPT}" "qemu" | sed 's/--static//' | sed 's/--disable-tcg//' | xargs ./configure
make -j $(nproc)

echo "Install Qemu"
sudo -E make install

# Add link from /usr/local/bin to /usr/bin
sudo ln -sf $(command -v qemu-system-${QEMU_ARCH}) "/usr/bin/qemu-lite-system-${QEMU_ARCH}"
popd
}

main() {
build_and_install_qemu
}

main
12 changes: 12 additions & 0 deletions scripts/kata-env-setup.sh
@@ -0,0 +1,12 @@
#!/bin/bash

set -e

cidir=$(dirname "$0")

source "${cidir}/lib.sh"

clone_repo "github.com/kata-containers/runtime"

${cidir}/setup_env_ubuntu.sh || true
${cidir}/install_kata_image.sh && ${cidir}/install_kata_kernel.sh && ${cidir}/install_qemu.sh
166 changes: 166 additions & 0 deletions scripts/lib.sh
@@ -0,0 +1,166 @@
#!/bin/bash
#
# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

set -e

export KATA_RUNTIME=${KATA_RUNTIME:-cc}

# If we fail for any reason a message will be displayed
die(){
msg="$*"
echo "ERROR: $msg" >&2
exit 1
}

info() {
echo -e "INFO: $*"
}

function clone_repo() {
github_project="$1"
project_dir="${GOPATH}/src/${github_project}"

echo "Retrieve repository ${github_project}"
go get -d ${github_project} || true
}

function clone_and_build() {
github_project="$1"
make_target="$2"
project_dir="${GOPATH}/src/${github_project}"

echo "Retrieve repository ${github_project}"
go get -d ${github_project} || true

# fixme: once tool to parse and get branches from github is
# completed, add it here to fetch branches under testing

pushd ${project_dir}

echo "Build ${github_project}"
if [ ! -f Makefile ]; then
echo "Run autogen.sh to generate Makefile"
bash -f autogen.sh
fi

make

popd
}

function clone_build_and_install() {
clone_and_build $1 $2
pushd "${GOPATH}/src/${1}"
echo "Install repository ${1}"
sudo -E PATH=$PATH KATA_RUNTIME=${KATA_RUNTIME} make install
popd
}

function install_yq() {
GOPATH=${GOPATH:-${HOME}/go}
local yq_path="${GOPATH}/bin/yq"
local yq_pkg="github.com/mikefarah/yq"
[ -x "${GOPATH}/bin/yq" ] && return

case "$(arch)" in
"aarch64")
goarch=arm64
;;

"x86_64")
goarch=amd64
;;
"*")
echo "Arch $(arch) not supported"
exit
;;
esac

mkdir -p "${GOPATH}/bin"

# Workaround to get latest release from github (to not use github token).
# Get the redirection to latest release on github.
yq_latest_url=$(curl -Ls -o /dev/null -w %{url_effective} "https://${yq_pkg}/releases/latest")
# The redirected url should include the latest release version
# https://github.com/mikefarah/yq/releases/tag/<VERSION-HERE>
yq_version=$(basename "${yq_latest_url}")


local yq_url="https://${yq_pkg}/releases/download/${yq_version}/yq_linux_${goarch}"
curl -o "${yq_path}" -L ${yq_url}
chmod +x ${yq_path}
}

function get_version(){
dependency="$1"
GOPATH=${GOPATH:-${HOME}/go}
# This is needed in order to retrieve the version for qemu-lite
install_yq >&2
runtime_repo="github.com/kata-containers/runtime"
runtime_repo_dir="$GOPATH/src/${runtime_repo}"
versions_file="${runtime_repo_dir}/versions.yaml"
mkdir -p "$(dirname ${runtime_repo_dir})"
[ -d "${runtime_repo_dir}" ] || git clone --quiet https://${runtime_repo}.git "${runtime_repo_dir}"
[ ! -f "$versions_file" ] && { echo >&2 "ERROR: cannot find $versions_file"; exit 1; }
result=$("${GOPATH}/bin/yq" read "$versions_file" "$dependency")
[ "$result" = "null" ] && result=""
echo "$result"
}


function apply_depends_on() {
pushd "${GOPATH}/src/${kata_repo}"
label_lines=$(git log --format=%s%b master.. | grep "Depends-on:" || true)
if [ "${label_lines}" == "" ]; then
popd
return 0
fi

nb_lines=$(echo "${label_lines}" | wc -l)

repos_found=()
for i in $(seq 1 "${nb_lines}")
do
label_line=$(echo "${label_lines}" | sed "${i}q;d")
label_str=$(echo "${label_line}" | awk '{print $2}')
repo=$(echo "${label_str}" | cut -d'#' -f1)
if [[ "${repos_found[@]}" =~ "${repo}" ]]; then
echo "Repository $repo was already defined in a 'Depends-on:' tag."
echo "Only one repository per tag is allowed."
return 1
fi
repos_found+=("$repo")
pr_id=$(echo "${label_str}" | cut -d'#' -f2)

echo "This PR depends on repository: ${repo} and pull request: ${pr_id}"
if [ ! -d "${GOPATH}/src/${repo}" ]; then
go get -d "$repo" || true
fi

pushd "${GOPATH}/src/${repo}"
echo "Fetching pull request: ${pr_id} for repository: ${repo}"
git fetch origin "pull/${pr_id}/head" && git checkout FETCH_HEAD && git rebase origin/master
popd
done

popd
}

function waitForProcess(){
wait_time="$1"
sleep_time="$2"
cmd="$3"
while [ "$wait_time" -gt 0 ]; do
if eval "$cmd"; then
return 0
else
sleep "$sleep_time"
wait_time=$((wait_time-sleep_time))
fi
done
return 1
}
68 changes: 68 additions & 0 deletions scripts/setup_env_ubuntu.sh
@@ -0,0 +1,68 @@
#!/bin/bash
#
# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

set -e

cidir=$(dirname "$0")
source "/etc/os-release"
source "${cidir}/lib.sh"

echo "Update apt repositories"
sudo -E apt update

echo "Install chronic"
sudo -E apt install -y moreutils

echo "Install kata containers dependencies"
chronic sudo -E apt install -y libtool automake autotools-dev autoconf bc alien libpixman-1-dev coreutils

echo "Install qemu dependencies"
chronic sudo -E apt install -y libcap-dev libattr1-dev libcap-ng-dev librbd-dev

echo "Install kernel dependencies"
chronic sudo -E apt install -y libelf-dev

echo "Install CRI-O dependencies for all Ubuntu versions"
chronic sudo -E apt install -y libglib2.0-dev libseccomp-dev libapparmor-dev \
libgpgme11-dev go-md2man thin-provisioning-tools

echo "Install bison binary"
chronic sudo -E apt install -y bison

echo "Install libudev-dev"
chronic sudo -E apt-get install -y libudev-dev

echo "Install Build Tools"
sudo -E apt install -y build-essential python pkg-config zlib1g-dev

echo -e "Install CRI-O dependencies available for Ubuntu $VERSION_ID"
sudo -E apt install -y libdevmapper-dev btrfs-tools util-linux

if [ "$VERSION_ID" == "16.04" ]; then
echo "Install os-tree"
sudo -E add-apt-repository ppa:alexlarsson/flatpak -y
sudo -E apt update
fi

sudo -E apt install -y libostree-dev

echo "Install YAML validator"
sudo -E apt install -y yamllint

echo "Install tools for metrics tests"
sudo -E apt install -y smem jq

if [ "$(arch)" == "x86_64" ]; then
echo "Install Kata Containers OBS repository"
obs_url="http://download.opensuse.org/repositories/home:/katacontainers:/release/xUbuntu_$(lsb_release -rs)/"
sudo sh -c "echo 'deb $obs_url /' > /etc/apt/sources.list.d/kata-containers.list"
curl -sL "${obs_url}/Release.key" | sudo apt-key add -
sudo -E apt-get update
fi

echo -e "Install cri-containerd dependencies"
sudo -E apt install -y libseccomp-dev libapparmor-dev btrfs-tools make gcc pkg-config
13 changes: 7 additions & 6 deletions types/persist.pb.go
2 changes: 1 addition & 1 deletion vendor/github.com/Azure/go-ansiterm/parser.go
202 changes: 202 additions & 0 deletions vendor/github.com/clearcontainers/proxy/COPYING
84 changes: 84 additions & 0 deletions vendor/github.com/clearcontainers/proxy/api/doc.go
235 changes: 235 additions & 0 deletions vendor/github.com/clearcontainers/proxy/api/frame.go
183 changes: 183 additions & 0 deletions vendor/github.com/clearcontainers/proxy/api/payload.go