~eliasnaur/gio#125: Multimedia support

Is there support or a method for playing audio and video?

Thank you

Status
REPORTED
Submitter
~adamseraj
Assigned to
No-one
Submitted
a month ago
Updated
7 days ago
Labels
No labels applied.

~adamseraj a month ago

???

~eliasnaur a month ago

No direct support, no. Depending on the platform, you can embed Gio with a native widget that can play audio/video.

~joe-getcouragenow 20 days ago

I am also looking to implement video playback and WebRTC video conferencing (https://github.com/pion).

Could we not inject frames onto a Gio canvas directly?

~eliasnaur 19 days ago

On Thu Jun 18, 2020 at 22:17, ~joe-getcouragenow wrote:

I am also looking to implement video playback and WebRTC video conferencing (https://github.com/pion).

Could we not inject frames onto a Gio canvas directly?

We could, but that's unlikely to be performant. Depending on your platform and needs there are at least two options:

  1. Embed a platform-specific widget for decoding and displaying video. Gio will need some way to refer to the native widget, and display it alongside Gio widgets.
  2. Some platforms have facilities for decoding video content directly into GPU textures that Gio could display, similar to an ImageOp.

-- elias

~joe-getcouragenow 18 days ago

Ok, thanks for the response @eliasnaur.

Thanks for the tip about ImageOp. I will keep digging :)

~adamseraj 17 days ago

Thanks for the clarification

~dejadeja9 15 days ago

I am trying to work on a POC video player using ffmpeg (which should be able to provide a usable solution at least for desktops). Currently it uses ImageOp to show the picture once the frame is decoded and converted to RGBA using libswscale. The CPU usage is a little high but it does seem to work. I think not requiring a native widget is a very appealing feature for gio and I am wondering if gio can provide native support for YUV images (image.YCbCr) so that some conversion / buffer copying can be avoided and the performance is on par with players like VLC/MPlayer.

I am not sure which is the right approach, a new op similar to ImageOp or extending ImageOp. ~eliasnaur can you provide some guidance here? Ideally it should be a sticky image whose Y, Cb, Cr pixels can be refreshed. This is some related sample OpenGL code: http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c

If you can lay out the skeleton of how the change could happen in gio, I may be able to take a stab.

FWIW I also built a small audio player sample using beep. Source code is here: https://github.com/dejadejade/giox/tree/master/examples/player
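
For anyone curious, the core of a beep playback loop is roughly the following. This is a minimal sketch with a hypothetical input file name, not the code from the linked example:

package main

import (
    "log"
    "os"
    "time"

    "github.com/faiface/beep"
    "github.com/faiface/beep/mp3"
    "github.com/faiface/beep/speaker"
)

func main() {
    f, err := os.Open("song.mp3") // hypothetical input file
    if err != nil {
        log.Fatal(err)
    }
    streamer, format, err := mp3.Decode(f)
    if err != nil {
        log.Fatal(err)
    }
    defer streamer.Close()

    // Initialize the speaker with the file's sample rate and a ~100ms buffer.
    if err := speaker.Init(format.SampleRate, format.SampleRate.N(time.Second/10)); err != nil {
        log.Fatal(err)
    }

    // Play the stream and block until it finishes.
    done := make(chan struct{})
    speaker.Play(beep.Seq(streamer, beep.Callback(func() { close(done) })))
    <-done
}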

~eliasnaur 14 days ago

On Tue Jun 23, 2020 at 14:29, ~dejadeja9 wrote:

I am trying to work on a POC video player using ffmpeg (which should be able to provide a usable solution at least for desktops). Currently it uses ImageOp to show the picture once the frame is decoded and converted to RGBA using libswscale. The CPU usage is a little high but it does seem to work. I think not requiring a native widget is a very appealing feature for gio

Good point, thank you.

and I am wondering if gio can provide native support for YUV images (image.YCbCr) so that some conversion / buffer copying can be avoided and the performance is on par with players like VLC/MPlayer.

I assume VLC/MPlayer can decode directly to a GPU texture, avoiding the CPU=>GPU copy? (That doesn't detract from your point of having cross-platform video).

I am not sure which is the right approach, a new op similar to ImageOp or extending ImageOp. ~eliasnaur can you provide some guidance here? Ideally it should be a sticky image whose Y, Cb, Cr pixels can be refreshed.

From the documentation on NewImageOp:

NewImageOp creates an ImageOp backed by src. See
gioui.org/io/system.FrameEvent for a description of when data referenced by
operations is safe to re-use.

NewImageOp assumes the backing image is immutable, and may cache a copy of
its contents in a GPU-friendly way. Create new ImageOps to ensure that
changes to an image is reflected in the display of it.

So you just need a new ImageOp created with NewImageOp every time your backing image changes.

Note the reuse comment: you can't touch the backing pixels after calling FrameEvent.Frame until you receive another FrameEvent. In practice that means you need a double-buffer scheme where you write to one image while the other is processed by the GPU.
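
In sketch form, with a hypothetical decodeNextFrameInto standing in for your ffmpeg/libswscale code, that could look like:

import (
    "image"

    "gioui.org/f32"
    "gioui.org/op"
    "gioui.org/op/paint"
)

var (
    bufs [2]*image.RGBA // two backing images, allocated to the video frame size
    cur  int            // index of the buffer we are currently allowed to write
)

func drawVideoFrame(ops *op.Ops) {
    img := bufs[cur]
    decodeNextFrameInto(img) // hypothetical: ffmpeg/libswscale writes RGBA pixels here

    // ImageOp assumes an immutable backing image, so create a fresh one
    // whenever the pixels change.
    paint.NewImageOp(img).Add(ops)
    sz := img.Bounds().Size()
    paint.PaintOp{Rect: f32.Rectangle{Max: f32.Point{X: float32(sz.X), Y: float32(sz.Y)}}}.Add(ops)

    // After the frame is submitted the GPU may still be reading img,
    // so the next frame must be decoded into the other buffer.
    cur = 1 - cur
}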

This is some related sample OpenGL code: http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c

If you can lay out the skeleton of how the change could happen in gio, I may be able to take a stab.

I'm definitely not an expert on YUV images, but I believe you need the following:

  • Add a special case to NewImageOp for image.YCbCr
  • Add a decodeYCbCr to gpu/shaders/common.inc
  • Add a materialYCbCr materialType variant to gpu/gpu.go and handle it similarly to materialTexture. As I read the OpenGL example, you'll need multiple textures per YUV image.
  • Add a FetchColorExpr variant for materialYCbCr to internal/cmd/convertshaders/main.go
  • Rename materialTexture to, say, materialRGBA.

Thank you for working on this.

FWIW I also built a small audio player sample using beep. Source code is here: https://github.com/dejadejade/giox/tree/master/examples/player

Nice. I'm looking forward to your complete Gio video player!

-- elias

~dejadeja9 14 days ago

Thanks ~eliasnaur! This is very helpful.

I am new to OpenGL and I am not clear on how exactly VLC/MPlayer copies the pixel data to a texture. I thought there was some sort of memory mapping between GPU/CPU, hence the "sticky" part, as opposed to copying/uploading the data as with ImageOp. But having a YUV ImageOp should still reduce the CPU usage significantly, I assume.

My understanding is that we will need to upload 3 buffers to OpenGL. I am wondering if that means I can just update renderer.texHandle and renderer.drawOps, adding two lines as below?

case materialTexture:
    r.ctx.BindTexture(0, r.texHandle(m.texture))
}

I will dig into the code and come back with more questions. Thanks!

~eliasnaur 14 days ago

On Thu Jun 25, 2020 at 12:15 AM CEST, ~dejadeja9 wrote:

Thanks ~eliasnaur! This is very helpful.

I am new to OpenGL and I am not clear on how exactly VLC/MPlayer copies the pixel data to a texture. I thought there was some sort of memory mapping between GPU/CPU, hence the "sticky" part, as opposed to copying/uploading the data as with ImageOp. But having a YUV ImageOp should still reduce the CPU usage significantly, I assume.

You can map GPU-accessible memory to save a copy, but not with the ImageOp interface. Let's save that for after YUV support.

My understanding is that we will need to upload 3 buffers to OpenGL. I am wondering if that means I can just update renderer.texHandle and renderer.drawOps, adding two lines as below?

case materialTexture:
    r.ctx.BindTexture(0, r.texHandle(m.texture))
}

I believe you'll have to create 3 handles to 3 single-channel textures and bind them all.

Search in the code for materialTexture; I believe there are other places it is special-cased.

Perhaps the best approach is to add materialYCbCr, make it behave just like materialTexture, and then start changing it to match the format.

-- elias

~dejadeja9 13 days ago

Thanks. Here is what I have so far. It is still not working and is missing shaders/useProgram() etc., but I wanted to check if this is what you'd expect. Please disregard the naming; it will be updated as you suggested once it works (e.g., yuv to ycbcr). And I think renaming the RGBA stuff should come in as a separate patch.

A few notes:

  1. A new format TextureFormatYUV and UploadPixelData() is needed in the backend?
  2. A new field yuvtexture is added to material, and a corresponding enum value.
  3. A new field yuv is added to imageOpData.
  4. A new structure yuvtexture to hold the 3 backend texture handles.
  5. ImageOp changed to use image.Image.

Question: where do I use the shader program? When creating the handles?

diff --git a/gpu/backend/backend.go b/gpu/backend/backend.go
index 871eea4..5531f21 100644
--- a/gpu/backend/backend.go
+++ b/gpu/backend/backend.go
@@ -151,6 +151,7 @@ type Timer interface {
 
 type Texture interface {
    Upload(img *image.RGBA)
+   UploadPixelData(pix []byte, width, height int)
    Release()
 }
 
@@ -176,6 +177,7 @@ const (
 const (
    TextureFormatSRGB TextureFormat = iota
    TextureFormatFloat
+   TextureFormatYUV
 )
 
 const (
diff --git a/gpu/gpu.go b/gpu/gpu.go
index 48a6334..b71067b 100644
--- a/gpu/gpu.go
+++ b/gpu/gpu.go
@@ -111,6 +111,9 @@ type material struct {
    color f32color.RGBA
    // For materialTypeTexture.
    texture *texture
+   // For yuvTypeTexture
+   yuvtexture *yuvtexture
+
    uvTrans f32.Affine2D
 }
 
@@ -123,6 +126,7 @@ type clipOp struct {
 type imageOpData struct {
    rect   image.Rectangle
    src    *image.RGBA
+   yuv    *image.YCbCr
    handle interface{}
 }
 
@@ -146,16 +150,29 @@ func (op *clipOp) decode(data []byte) {
    }
 }
 
-func decodeImageOp(data []byte, refs []interface{}) imageOpData {
+func decodeImageOp(data []byte, refs []interface{}) (mtype materialType, imgData imageOpData) {
    if opconst.OpType(data[0]) != opconst.TypeImage {
        panic("invalid op")
    }
    handle := refs[1]
    if handle == nil {
-       return imageOpData{}
+       return materialColor,imageOpData{}
    }
    bo := binary.LittleEndian
-   return imageOpData{
+
+   var src *image.RGBA
+   var yuv *image.YCbCr
+   if s, ok := refs[0].(*image.RGBA); ok {
+       src = s
+       mtype = materialTexture
+   } else if y, ok := refs[0].(*image.YCbCr); ok {
+       yuv = y
+       mtype = yuvTexture
+   } else {
+       panic("invalid image type")
+   }
+
+   imgData = imageOpData{
        rect: image.Rectangle{
            Min: image.Point{
                X: int(bo.Uint32(data[1:])),
@@ -166,9 +183,12 @@ func decodeImageOp(data []byte, refs []interface{}) imageOpData {
                Y: int(bo.Uint32(data[13:])),
            },
        },
-       src:    refs[0].(*image.RGBA),
+       //      src:    refs[0].(*image.RGBA),
+       src:    src,
+       yuv:    yuv,
        handle: handle,
    }
+   return
 }
 
 func decodeColorOp(data []byte) color.RGBA {
@@ -214,6 +234,11 @@ type texture struct {
    tex backend.Texture
 }
 
+type yuvtexture struct {
+   src *image.YCbCr
+   ytex, utex, vtex backend.Texture
+}
+
 type blitter struct {
    ctx         backend.Device
    viewport    image.Point
@@ -274,6 +299,7 @@ const (
 const (
    materialColor materialType = iota
    materialTexture
+   yuvTexture
 )
 
 func New(ctx backend.Device) (*GPU, error) {
@@ -396,12 +422,54 @@ func (r *renderer) texHandle(t *texture) backend.Texture {
    return t.tex
 }
 
+func (r *renderer) yuvHandles(t *yuvtexture) (backend.Texture, backend.Texture, backend.Texture) {
+   if t.ytex != nil && t.utex != nil && t.vtex != nil {
+       return t.ytex,t.utex,t.vtex
+   }
+
+   w, h := t.src.Bounds().Dx(), t.src.Bounds().Dy()
+   tex, err := r.ctx.NewTexture(backend.TextureFormatYUV, w, h, backend.FilterLinear, backend.FilterLinear, backend.BufferBindingTexture)
+   if err != nil {
+       panic(err)
+   }
+   tex.UploadPixelData(t.src.Y, w, h)
+   t.ytex = tex
+
+   tex, err = r.ctx.NewTexture(backend.TextureFormatYUV, w, h, backend.FilterLinear, backend.FilterLinear, backend.BufferBindingTexture)
+   if err != nil {
+       panic(err)
+   }
+   tex.UploadPixelData(t.src.Cb, w/2, h/2)
+   t.utex = tex
+
+   tex, err = r.ctx.NewTexture(backend.TextureFormatYUV, w, h, backend.FilterLinear, backend.FilterLinear, backend.BufferBindingTexture)
+   if err != nil {
+       panic(err)
+   }
+   tex.UploadPixelData(t.src.Cr, w/2, h/2)
+   t.vtex = tex
+
+   return t.ytex,t.utex,t.vtex
+}
+
 func (t *texture) release() {
    if t.tex != nil {
        t.tex.Release()
    }
 }
 
+func (t *yuvtexture) release() {
+   if t.ytex != nil {
+       t.ytex.Release()
+   }
+   if t.utex != nil {
+       t.utex.Release()
+   }
+   if t.vtex != nil {
+       t.vtex.Release()
+   }
+}
+
 func newRenderer(ctx backend.Device) *renderer {
    r := &renderer{
        ctx:     ctx,
@@ -759,8 +827,8 @@ loop:
            state.matType = materialColor
            state.color = decodeColorOp(encOp.Data)
        case opconst.TypeImage:
-           state.matType = materialTexture
-           state.image = decodeImageOp(encOp.Data, encOp.Refs)
+           //          state.matType = materialTexture
+           state.matType, state.image = decodeImageOp(encOp.Data, encOp.Refs)
        case opconst.TypePaint:
            op := decodePaintOp(encOp.Data)
            // Transform (if needed) the painting rectangle and if so generate a clip path,
@@ -847,10 +915,15 @@ func (d *drawState) materialFor(cache *resourceCache, rect f32.Rectangle, off f3
        m.material = materialColor
        m.color = f32color.RGBAFromSRGB(d.color)
        m.opaque = m.color.A == 1.0
-   case materialTexture:
-       m.material = materialTexture
+   case materialTexture, yuvTexture:
+       m.material = d.matType
        dr := boundRectF(rect.Add(off))
-       sz := d.image.src.Bounds().Size()
+       var sz image.Point
+       if d.image.src != nil {
+           sz = d.image.src.Bounds().Size()
+       } else {
+           sz = d.image.yuv.Bounds().Size()
+       }
        sr := layout.FRect(d.image.rect)
        if dx := float32(dr.Dx()); dx != 0 {
            // Don't clip 1 px width sources.
@@ -868,13 +941,25 @@ func (d *drawState) materialFor(cache *resourceCache, rect f32.Rectangle, off f3
        }
        tex, exists := cache.get(d.image.handle)
        if !exists {
-           t := &texture{
-               src: d.image.src,
+           if d.matType == materialTexture {
+               t := &texture{
+                   src: d.image.src,
+               }
+               cache.put(d.image.handle, t)
+               tex = t
+           } else {
+               t := &yuvtexture{
+                   src: d.image.yuv,
+               }
+               cache.put(d.image.handle, t)
+               tex = t
            }
-           cache.put(d.image.handle, t)
-           tex = t
        }
-       m.texture = tex.(*texture)
+       if d.matType == materialTexture {
+           m.texture = tex.(*texture)
+       } else {
+           m.yuvtexture = tex.(*yuvtexture)
+       }
        uvScale, uvOffset := texSpaceTransform(sr, sz)
        m.uvTrans = trans.Mul(f32.Affine2D{}.Scale(f32.Point{}, uvScale).Offset(uvOffset))
    }
@@ -892,6 +977,11 @@ func (r *renderer) drawZOps(ops []imageOp) {
        switch m.material {
        case materialTexture:
            r.ctx.BindTexture(0, r.texHandle(m.texture))
+       case yuvTexture:
+           ytex, utex, vtex := r.yuvHandles(m.yuvtexture)
+           r.ctx.BindTexture(0, ytex)
+           r.ctx.BindTexture(1, utex)
+           r.ctx.BindTexture(2, vtex)
        }
        drc := img.clip
        scale, off := clipSpaceTransform(drc, r.blitter.viewport)
@@ -912,6 +1002,11 @@ func (r *renderer) drawOps(ops []imageOp) {
        switch m.material {
        case materialTexture:
            r.ctx.BindTexture(0, r.texHandle(m.texture))
+       case yuvTexture:
+           ytex, utex, vtex := r.yuvHandles(m.yuvtexture)
+           r.ctx.BindTexture(0, ytex)
+           r.ctx.BindTexture(1, utex)
+           r.ctx.BindTexture(2, vtex)
        }
        drc := img.clip
 
diff --git a/op/paint/paint.go b/op/paint/paint.go
index b4a9a56..2460b94 100644
--- a/op/paint/paint.go
+++ b/op/paint/paint.go
@@ -24,7 +24,8 @@ type ImageOp struct {
 
    uniform bool
    color   color.RGBA
-   src     *image.RGBA
+// src     *image.RGBA
+   src     image.Image
 
    // handle is a key to uniquely identify this ImageOp
    // in a map of cached textures.
@@ -60,9 +61,9 @@ func NewImageOp(src image.Image) ImageOp {
            uniform: true,
            color:   col,
        }
-   case *image.RGBA:
+   case *image.RGBA, *image.YCbCr:
        bounds := src.Bounds()
-       if bounds.Min == (image.Point{}) && src.Stride == bounds.Dx()*4 {
+       if bounds.Min == (image.Point{}) { // && src.Stride == bounds.Dx()*4 {
            return ImageOp{
                Rect:   src.Bounds(),

~eliasnaur 12 days ago

On Thu Jun 25, 2020 at 14:14, ~dejadeja9 wrote:

Thanks. Here is what I have so far. It is still not working and is missing shaders/useProgram() etc., but I wanted to check if this is what you'd expect. Please disregard the naming; it will be updated as you suggested once it works (e.g., yuv to ycbcr). And I think renaming the RGBA stuff should come in as a separate patch.

Some of the patch needs revision, but it looks like you're on the right track, except for TextureFormatYUV. As far as I read your example code, there isn't a YUV texture format per se. Instead, the example uses the single-channel GL_LUMINANCE format. See line 147 of

http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c

A few notes: 1. A new format TextureFormatYUV and UploadPixelData() is needed in the backend? 2. A new field yuvtexture is added to material, and a corresponding enum value 3. A new field yuv is added to imageOpData 4. A new structure yuvtexture to hold the 3 backend texture handles 5. ImageOp changed to use image.Image

Question: where do I use the shader program? When creating the handles?

I'm very sorry, but I don't have the bandwidth for detailed help.

As I mentioned before, my own strategy for doing large changes to unfamiliar code is to take steps as small as possible, verifying I didn't break anything at each step.

In this case, I'd make the first milestone a yuvTexture materialType that behaves exactly the same as materialTexture, and only then make the YUV changes. That means duplicating switch cases, shader programs and so on.

Another strategy I use when I'm not even sure what needs changing is to make the fewest possible changes to see something working. In this case that means modifying materialTexture to act like materialYUV, and only once you see something working, splitting materialTexture into YUV and RGBA variants.

~eliasnaur referenced this from #125 12 days ago

~dejadeja9 12 days ago

Thanks! Yes, TextureFormatYUV will be translated into GL_LUMINANCE when calling TexImage2D. Or do you prefer a different name?

I totally agree with your thinking w.r.t. the strategies, and I think it boils down to when to use an image.Image interface vs. RGBA/YCbCr, and where to put the 3 backend texture handles (either add them to texture or introduce a new yuvtexture). Either way it should be straightforward, but the code changes probably won't be very small. I'll revisit the changes.

I should have made my question more specific, though. I feel like I'd need changes to blitter.blit, which will trigger more changes to things like the uniforms below (e.g., introducing 3 samplers for the YUV textures, and possibly a new yuvType for semi-planar use cases like NV12), the createColorPrograms() function, etc. And I haven't done the research on how to chain the existing shaders with the new yuv2rgb shader yet. So I was wondering if there is a less invasive approach, similar to what is done in app/internal/srgb/srgb.go? Thus I wanted to check if you have any thoughts on that direction.

Meanwhile I'll try to see if I can make those invasive modifications to the blitter approach. Thanks.

case materialTexture:
        t1, t2, t3, t4, t5, t6 := uvTrans.Elems()
        b.texUniforms.vert.blitUniforms.uvTransformR1 = [4]float32{t1, t2, t3, 0}
        b.texUniforms.vert.blitUniforms.uvTransformR2 = [4]float32{t4, t5, t6, 0}
        uniforms = &b.texUniforms.vert.blitUniforms
    }

~eliasnaur 11 days ago

On Sat Jun 27, 2020 at 02:23, ~dejadeja9 wrote:

Thanks! Yes, TextureFormatYUV will be translated into GL_LUMINANCE when calling TexImage2D. Or do you prefer a different name?

I'd prefer something that refers to the data type, not the use (in which case I wouldn't have thought your code was wrong). TextureFormatLuminance is better, for lack of a better name.

I should have made my question more specific, though. I feel like I'd need changes to blitter.blit, which will trigger more changes to things like the uniforms below (e.g., introducing 3 samplers for the YUV textures, and possibly a new yuvType for semi-planar use cases like NV12), the createColorPrograms() function, etc. And I haven't done the research on how to chain the existing shaders with the new yuv2rgb shader yet. So I was wondering if there is a less invasive approach, similar to what is done in app/internal/srgb/srgb.go? Thus I wanted to check if you have any thoughts on that direction.

I don't see a way to do it transparently like sRGB emulation, sorry. If there were a builtin texture format for video decoding, then maybe.

-- elias

~dejadeja9 11 days ago

OK, it turns out the change might not be as big as I thought. Below are the changes to the shaders and their generator. Is this what you'd expect?

I am still working through the blitter changes, but the dots seem to be connecting. I have one quick question if you don't mind: where/when is the uniform for the tex sampler set? Or is it bound to 0 all the time so no setting is needed? (Which would mean I don't need to set the uniforms for ytex, utex, vtex either?)

Thanks!

Header:         `layout(binding=0) uniform sampler2D tex;`

A new material type is added to convertshaders/main.go

+                       {
+                               FetchColorExpr: `yuv2rgb(ytex, utex, vtex, vUV)`,
+                               Header: `
+                               layout(binding=0) uniform sampler2D ytex;
+                               layout(binding=1) uniform sampler2D utex;
+                               layout(binding=2) uniform sampler2D vtex;
+                               `,
+                       },

A new yuv2rgb() function is added to common.inc

+vec4 yuv2rgb(in sampler2D ytex, in sampler2D utex, in sampler2D vtex, in vec2 vUV) {
+       vec3 rgb, yuv;
+       yuv.r = texture(ytex, vUV).r;
+       yuv.g = texture(utex, vUV).r - 0.5;
+       yuv.b = texture(vtex, vUV).r - 0.5;
+
+       rgb = mat3(     1.0,     1.0,           1.0,
+                               0.0,    -0.39465,       2.03211,
+                               1.13983,-0.58060,       0.0) * yuv;
+       return vec4(rgb, 1.0);
+}
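
For reference, since mat3 is column-major, that matrix works out to the usual BT.601 conversion: R = Y + 1.13983*(Cr-0.5), G = Y - 0.39465*(Cb-0.5) - 0.58060*(Cr-0.5), B = Y + 2.03211*(Cb-0.5). These coefficients assume full-range Y; limited-range sources would need an extra offset and scale.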

~eliasnaur 11 days ago

On Sat Jun 27, 2020 at 6:17 PM CEST, ~dejadeja9 wrote:

OK, it turns out the change might not be as big as I thought. Below are the changes to the shaders and their generator. Is this what you'd expect?

Seems right to me, except for two questions.

I am still working through the blitter changes, but the dots seem to be connecting. I have one quick question if you don't mind: where/when is the uniform for the tex sampler set? Or is it bound to 0 all the time so no setting is needed? (Which would mean I don't need to set the uniforms for ytex, utex, vtex either?)

You shouldn't need to set sampler values. They're set automatically, either by the binding=X layout qualifiers or (for older OpenGL versions) by the backend:

https://git.sr.ht/~eliasnaur/gio/tree/master/gpu/gl/backend.go#L453

Thanks!

Header:         `layout(binding=0) uniform sampler2D tex;`

A new material type is added to convertshaders/main.go

+                       {
+                               FetchColorExpr: `yuv2rgb(ytex, utex, vtex, vUV)`,
+                               Header: `
+                               layout(binding=0) uniform sampler2D ytex;
+                               layout(binding=1) uniform sampler2D utex;
+                               layout(binding=2) uniform sampler2D vtex;
+                               `,
+                       },

A new yuv2rgb() function is added to common.inc

+vec4 yuv2rgb(in sampler2D ytex, in sampler2D utex, in sampler2D vtex, in vec2 vUV) {
+       vec3 rgb, yuv;
+       yuv.r = texture(ytex, vUV).r;
+       yuv.g = texture(utex, vUV).r - 0.5;
+       yuv.b = texture(vtex, vUV).r - 0.5;
+
+       rgb = mat3(     1.0,     1.0,           1.0,
+                               0.0,    -0.39465,       2.03211,
+                               1.13983,-0.58060,       0.0) * yuv;
+       return vec4(rgb, 1.0);
+}

Perhaps a vec3 result is clearer? Or can YUV images have alpha values (in which case you shouldn't return 1.0)?

~dejadeja9 11 days ago

Thanks. I just managed to display a YCbCr image natively. It is a little brighter than the original JPEG (I think it is due to the transform matrix because there are so many variants of YUV) but overall it works as expected.

I have yuv2rgb return vec4 to keep the FetchColorExpr (yuv2rgb(ytex, utex, vtex, vUV)) simpler. And I expect that in the future yuv2rgb could get more complex (e.g., to handle other variants and an alpha value, possibly through a uniform). So I think we can leave it as is for now?

I should be able to clean this up into a set of patches for review next week. There will be 3 parts:

  1. Some changes to gl/backend.go to upload luminance data (see below). Are you OK with the name?

+func (t *gpuTexture) UploadLumninance(unit int, pix []byte, w, h int) {
+       t.backend.BindTexture(unit, t)
+       t.backend.funcs.TexImage2D(TEXTURE_2D, 0, t.triple.internalFormat, w, h, t.triple.format, t.triple.typ, pix)
+}

  2. The shaders and generator change as in the previous comment.
  3. The changes to gpu.go as shown before.

The majority of the changes are additive (around 200 lines), and we may clean them up in the future. Stay tuned.

~dejadeja9 11 days ago

Correction: there is a typo and the unit parameter is not needed.

+func (t *gpuTexture) UploadLuminance(pix []byte, w, h int) {
+       t.backend.BindTexture(0, t)
+       t.backend.funcs.TexImage2D(TEXTURE_2D, 0, t.triple.internalFormat, w, h, t.triple.format, t.triple.typ, pix)
+}

~eliasnaur 10 days ago

On Sat Jun 27, 2020 at 11:03 PM CEST, ~dejadeja9 wrote:

Thanks. I just managed to display a YCbCr image natively. It is a little brighter than the original JPEG (I think it is due to the transform matrix because there are so many variants of YUV) but overall it works as expected.

Great!

I have yuv2rgb return vec4 to keep the FetchColorExpr (yuv2rgb(ytex, utex, vtex, vUV)) simpler. And I expect that in the future yuv2rgb could get more complex (e.g., to handle other variants and an alpha value, possibly through a uniform). So I think we can leave it as is for now?

If we leave alpha for later, we should ensure that an image.YCbCr with alpha != 1.0 uses the RGBA fallback path.
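
Something like this in NewImageOp's type switch would be enough (a sketch only; image.NYCbCrA is the standard library's YCbCr-with-alpha type):

switch src.(type) {
case *image.NYCbCrA:
    // Has an alpha plane: keep using the generic RGBA conversion path for now.
case *image.YCbCr:
    // No alpha: safe to take the YUV texture path.
}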

I should be able to clean this up into a set of patches for review next week. There will be 3 parts: 1. Some changes to gl/backend.go to upload luminance data (see below). Are you OK with the name?

Sure. Let's see all 3 patches and I'll do a detailed review.

+func (t *gpuTexture) UploadLumninance(unit int, pix []byte, w, h int) {
+       t.backend.BindTexture(unit, t)
+       t.backend.funcs.TexImage2D(TEXTURE_2D, 0, t.triple.internalFormat, w, h, t.triple.format, t.triple.typ, pix)
+}

~dejadeja9 9 days ago

Two patches have been sent. The changes to gpu.go are still being cleaned up and should be ready later this week. After applying patch 1 you will need to regenerate shaders.go using go generate.

I changed UploadLuminance to UploadPixels to make it more generic, as this interface can also be used to upload other pixel formats.

I am new to sourcehut and the formatting may be a little messed up (apologies for that), but I hope it is enough for review.

~dejadeja9 9 days ago

The 3rd patch has also been sent. I am still not sure whether imageOpData should use an image.Image or two variables (src for the existing RGBA path and yuv for the new one). Currently it is the latter, and it doesn't look elegant.

~eliasnaur 9 days ago

Thank you. I'll wait for the entire patchset before reviewing so I have the whole picture. Are you able to take a stab at the Windows Direct3D backend for luminance textures? Unfortunately, you'll need a Windows installation to build HLSL versions of the shaders.

Please also add a YUV test to internal/rendertest.

~dejadeja9 9 days ago

The full set of changes has been sent to the patch mailing list. I'll take a look at the D3D backend. Creating and uploading the pixels seems quite similar, but I am not clear on where the 3 textures get rendered together yet. I will dig into the code a little bit. To your previous question on alpha: YUV images don't have an alpha component, based on my understanding. That is why Go's image/color package also hard-codes a 0xffff alpha value; see (c YCbCr) RGBA().
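
A quick way to check that with just the standard library:

// color.YCbCr is always fully opaque: RGBA() hard-codes the alpha.
_, _, _, a := color.YCbCr{Y: 128, Cb: 128, Cr: 128}.RGBA()
fmt.Println(a) // 65535 (0xffff)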

Also worth noting: the coefficients in the transform matrix are based on http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c and may need some adjustment in the future. I tried a few other matrices and the results are similar, so it is probably not a big deal for now.

~dejadeja9 7 days ago

I just tested a bit on Windows 10 and managed to make it work too. It turned out the change is much smaller than expected thanks to the abstraction. I will submit a patch soon.
