Tutorial for common image adjustments? #59

Closed
fbuchinger opened this issue Apr 2, 2013 · 82 comments

@fbuchinger

I'm new to libvips and want to perform common image corrections like adjusting white balance, hue, brightness or tonal range. Are there tutorials/documentation covering these adjustments? I'm also interested in applying filters like gaussian blur etc. with libvips.

@jcupitt
Member

jcupitt commented Apr 2, 2013

Good idea, I'll try to write a blog post about it.

In Python, you can adjust brightness / contrast etc. with simple arithmetic. For example:

y = x * 1.2 

Brightens by 20%. You can use array constants to adjust bands separately, e.g.:

y = x * [1, 2, 1]

will multiply the G channel of an RGB image by two, and leave other channels untouched.

Move the image to another colour space to do things like adjusting chroma. For example:

y = x.colourspace("lch") * [1, 1.5, 1]

turns an sRGB image into LCh (lightness, chroma, hue) and boosts chroma.
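
In current pyvips syntax the whole thing, end to end, might look like this untested sketch (filenames are placeholders):

import pyvips

x = pyvips.Image.new_from_file("input.jpg")

# convert to LCh, brighten L by 20% and boost chroma by 50%, then back to sRGB
y = x.colourspace("lch") * [1.2, 1.5, 1]
y = y.colourspace("srgb")

y.write_to_file("output.jpg")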

@jcupitt
Member

jcupitt commented Apr 2, 2013

For gaussian blur:

y = x.gaussblur(2)

@fbuchinger
Author

Thanks a lot! I'll start my first experiments with this information and get in touch again when I need more.

@jcupitt
Member

jcupitt commented Apr 3, 2013

I guess you found the manual as well?

https://libvips.github.io/pyvips/

@mogadanez

How can I split an image into tiles with the vips command line? Not dzsave, just tiling at the original size.

@jcupitt
Member

jcupitt commented Nov 29, 2013

You can do it with dzsave, in fact, with the --depth flag. For example:

$ vips dzsave --depth one wtc.png x
$ ls x_files/
0
$ ls x_files/0/
0_0.jpeg 0_2.jpeg 1_1.jpeg 2_0.jpeg 2_2.jpeg 3_1.jpeg 4_0.jpeg 4_2.jpeg
0_1.jpeg 1_0.jpeg 1_2.jpeg 2_1.jpeg 3_0.jpeg 3_2.jpeg 4_1.jpeg

And it just generates the first (0th) layer of the pyramid.
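
If you're in Python rather than on the command line, the equivalent from pyvips would be roughly (untested):

import pyvips

x = pyvips.Image.new_from_file("wtc.png")

# depth="one" writes only the full-resolution layer of tiles to x_files/0/;
# tile_size and overlap can be set as well if the defaults don't suit
x.dzsave("x", depth="one")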

@rmrio

rmrio commented Mar 6, 2014

How can I apply bicubic or bilinear interpolation to an image when resizing it via affine?

@jcupitt
Member

jcupitt commented Mar 6, 2014

Try something like:

$ vips affine wtc.png wtc2.png "1.5 0 0 1.5" --interpolate bicubic

Or in Ruby:

big = tile.affinei_resize(:bicubic, 8)

@rmrio

rmrio commented Mar 6, 2014

Thanks a lot, but I need to do this in Python. Is it possible?

@jcupitt
Member

jcupitt commented Mar 6, 2014

Try:

y = x.affine([1.5, 0, 0, 1.5], interpolate=pyvips.Interpolate.new("bicubic"))

There are other interpolators, try:

$ vips list classes | grep -i interpolate
  VipsInterpolate (interpolate), VIPS interpolators
    VipsInterpolateNearest (nearest), nearest-neighbour interpolation
    VipsInterpolateBilinear (bilinear), bilinear interpolation
    VipsInterpolateBicubic (bicubic), bicubic interpolation (Catmull-Rom)
    VipsInterpolateLbb (lbb), reduced halo bicubic
    VipsInterpolateNohalo (nohalo), edge sharpening resampler with halo reduction
    VipsInterpolateVsqbs (vsqbs), B-Splines with antialiasing smoothing
      im_point - interpolate value at single point
      im_point_bilinear - interpolate value at single point, linearly

@rmrio

rmrio commented Mar 7, 2014

Thanks John, it worked on the latest vips builds.

How can I increase the quality of the image when scaling down (making thumbnails)? I tried to use different interpolators, but it did not help.

@jcupitt
Member

jcupitt commented Mar 8, 2014

What kind of quality problems are you seeing? vipsthumbnail does a block-shrink to the size above the target, then uses affine for a finishing step to get the final size.

@rmrio

rmrio commented Mar 11, 2014

For example, let's take the image http://static.selectel.ru/wp-content/uploads/2014/02/PR-410-2-5.png
After applying the affinei_all function with the "bicubic" interpolator to this image we get the following result:
[resulting image]
I want to get a smoother result.
Thanks for your help John.

@jcupitt
Member

jcupitt commented Mar 11, 2014

Affine works by transforming each output point back to the input space, then running the selected interpolator to predict the image value at that point. Bicubic uses a 4x4 stencil and only the centre 2x2 has a large effect on the value, so affine will produce aliasing (the error you are seeing) if you decrease the image size by more than a factor of two.

To do large shrinks, you need to block-shrink down to a size above your target, then use affine with bicubic to size to exactly your target. This is what vipsthumbnail does, and there's a version in ruby-vips as well here:

https://github.com/jcupitt/ruby-vips/blob/master/examples/thumbnail.rb

You can also use techniques like EWA, but block + affine is much faster and almost equivalent.
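
In pyvips the recipe sketches out to something like this (untested; newer libvips also wraps all of it up in a single thumbnail operation):

import pyvips

def make_thumbnail(filename, target_width):
    x = pyvips.Image.new_from_file(filename)

    # integer block-shrink to just above the target size
    factor = max(1, x.width // target_width)
    x = x.shrink(factor, factor)

    # then an affine resize with bicubic for the final step
    residual = target_width / x.width
    return x.affine([residual, 0, 0, residual],
                    interpolate=pyvips.Interpolate.new("bicubic"))

make_thumbnail("big.jpg", 128).write_to_file("thumb.jpg")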

@rmrio

rmrio commented Mar 11, 2014

What is the Python alternative to the Ruby tile_cache() from the example?

@jcupitt
Member

jcupitt commented Mar 11, 2014

Try:

im = im.cache(im.Xsize(), 1, 100)

Though performance might not be great. You might need to not do the streaming stuff.

The Python API needs redoing, it's due for replacement as part of vips8.

@ericcorriel

I've been searching online for hours but some guidance would be appreciated. I'm creating large images programmatically but I'm not sure how to create a VImage from scratch, as it were, and populate it with my own data.

Imagine I have a large array of rgb values with (x,y) coordinates. What's the best way to add these values to an image whose size will exceed the 256MB that I can currently store in memory?

@jcupitt
Member

jcupitt commented Dec 28, 2015

Hi @ericcorriel, a very simple way to do this is to use vips_xyz() to make an image whose pixel values are their own coordinates.

For example, in Python you can write:

#!/usr/bin/python

import sys
from gi.repository import Vips

# make an image where band 0 is the x coordinate and band 1 the y coordinate
im = Vips.Image.xyz(int(sys.argv[1]), int(sys.argv[1]))

# move the origin to the centre
im = im - [im.width / 2, im.height / 2]

# a one-band image where pixels are distance from the centre
im = (im[0] ** 2 + im[1] ** 2) ** 0.5

# relational operations make uchar images with 0 for false, 255 for true
im = im < im.width / 3

im.write_to_file(sys.argv[2])

And now run:

$ ./circle.py 200 circle.tif

and circle.tif should be a one-band 200 x 200 pixel tif image with a white circle in the centre.

It'll work for very large images, for example on my laptop I see:

$ time ./circle.py 20000 circle.tif
real    0m18.755s
user    1m8.280s
sys 0m0.520s

That uses all the cores, runs in about 100 MB of RAM, and creates a 400 MB file. You could get the RAM down a bit if you used fewer cores.

The vips blog has some examples, perhaps:

http://libvips.blogspot.co.uk/2015/11/fancy-transforms.html

vips is in homebrew, I guess you saw?

@jcupitt
Member

jcupitt commented Dec 28, 2015

Or if you have an in-memory array you've made in some other way, you can wrap it up as a vips image with new_from_memory. This will not make a copy of the memory area, so you'll need to keep it alive as long as vips is using it.
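
With the current pyvips binding that looks roughly like this sketch, assuming a C-order uint8 RGB buffer (untested):

import numpy as np
import pyvips

# pixel data you have built yourself: height x width x bands, uint8
data = np.zeros((1080, 1920, 3), dtype=np.uint8)

# wrap it without copying; keep `data` alive while the image is in use
image = pyvips.Image.new_from_memory(data.data, 1920, 1080, 3, "uchar")
image.write_to_file("out.png")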

@kleisauke
Member

kleisauke commented May 28, 2016

I'm trying to improve the Node.js module sharp. See this feature request: lovell/sharp#435

I want to trim pixels according to the transparency levels of the given overlay image. Wherever the overlay image is opaque, the original is shown, and wherever the overlay is transparent, the result will be transparent as well.

Example: We have this image and this overlay and we want this output.

How should I write a function in C++ to accomplish this? Are these steps the correct/fastest (in terms of speed) way to do it?

  • First 'convert' the opaque pixels to white (if they're not already white) and then the transparent pixels to black (using the ifthenelse operation?).
  • After the black-and-white mask is created, use a libvips morphology operation (I'm not sure which one).

A C++ example is greatly appreciated!

Update:
The and operator (&) seems to work (only if the circle in the example is white on a transparent background). But it has some weird black pixels (with transparency) around the edge.

This is the output if we use the & operator. Comparing it against the output we want gives this:
[comparison image]
The second image is 0.60% different compared to the first (according to Resemble.js).

Is this an anti-aliasing issue?

@jcupitt
Member

jcupitt commented May 29, 2016

& is bitwise AND, so you can't use it to blend between two images, it'll do crazy things on intermediate values.

If you're going to be making PNG output, just put your transparency mask into the alpha channel. First, extract the RGB from the input image with extract_band, then add your mask as the new alpha with bandjoin.

There are some annoying cases to consider: images can be mono, RGB or CMYK, they can have optional alpha channels, and they can be 8-bit, 16-bit or float. I made you a sample program:

https://gist.github.com/jcupitt/15324d7cec7ece0c63439c518b191baa

vips has fast things to make circles and squares of any size, they might be a better way to make masks. Loading png overlays will be very, very slow.

http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/libvips-create.html#vips-mask-gaussian
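
The core of the alpha approach in pyvips, assuming both images are 8-bit sRGB and the overlay already has an alpha band (the gist above handles the general cases):

import pyvips

image = pyvips.Image.new_from_file("input.png")
overlay = pyvips.Image.new_from_file("overlay.png")

rgb = image[0:3]             # extract_band: keep just R, G, B
alpha = overlay[3]           # the overlay's transparency becomes the mask
result = rgb.bandjoin(alpha) # attach it as the new alpha

result.write_to_file("output.png")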

@SuhairZain

I'm sorry for hijacking this thread, but I have spent an entire day searching for ways to do this and I can't find one. I hope someone here can help me; it'd be a great favor.

I need to read an image, and apply a function individually to each pixel, which basically changes the [R, G, B] pixel to [R-G, G-R, R-B]. I've been doing this by looping over each pixel but this seems to be extremely inefficient. Is there any way I can use something like colorspace conversion or a filter in order to do this?

@jcupitt
Member

jcupitt commented Mar 31, 2017

Sure, in Python just write:

from gi.repository import Vips

image = Vips.Image.new_from_file(filename)

# split into one-band images, do the per-channel arithmetic, rejoin
r, g, b = image.bandsplit()
R = r - g
G = g - r
B = r - b
image = R.bandjoin([G, B])
image.write_to_file("x.tif")

If you want to be clever, you could do it with a recombination:

http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/libvips-conversion.html#vips-recomb

I think the matrix:

1 -1 0
-1 1 0
1 0 -1

would do what you want.

That'll save a float image with negative values, you'd need to think about how to handle that.
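
In Python the recomb version would be something like this untested sketch:

import pyvips

image = pyvips.Image.new_from_file("input.jpg")

# each row of the matrix defines one output band as a weighted sum of the inputs
m = pyvips.Image.new_from_array([[ 1, -1,  0],
                                 [-1,  1,  0],
                                 [ 1,  0, -1]])
out = image.recomb(m)

# the result is float and can be negative, so pick a format that keeps that
out.write_to_file("output.tif")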

@SuhairZain

I'm going to give that a try. It's unfortunate that currently the only Node interface for libvips is sharp, and it only provides a subset of the functions offered by the libvips Python binding. I'm going to try it in Python and let you know. Thanks a lot for the help. 😄

@jcupitt
Member

jcupitt commented Mar 31, 2017

There are some other node.js bindings:

https://github.com/jcupitt/libvips/issues/103

But I don't know if any of them became production-ready.

@jcupitt
Member

jcupitt commented Mar 31, 2017

I meant to add, please open a new issue if you have another question, it's fine to ask questions as issues, we're using it as a forum.

@SuhairZain

SuhairZain commented Apr 3, 2017

Hey @jcupitt,
The code works flawlessly and it flies. It runs instantly, while my previous code using another library takes ~10 sec per image. This library is simply amazing. 😄

I have opened and saved a file from NodeJS using node-ffi. I wrote the binding code for 3 functions in order to do this (vips_init, vips_image_new_from_file, vips_image_write_to_file). I was trying to replicate the Python code you shared in C but I was unable to find bandsplit(). Could you tell me how to do this in C, along with the channel subtraction code that you shared?

@jcupitt
Member

jcupitt commented Apr 4, 2017

Hi, that sounds very cool! node-ffi is good stuff.

bandsplit is defined in python:

https://github.com/jcupitt/libvips/blob/master/python/packages/gi/overrides/Vips.py#L905

Which (in turn) uses the override for []:

https://github.com/jcupitt/libvips/blob/master/python/packages/gi/overrides/Vips.py#L905

i.e. it builds an array with extract_band, something like:

result = []
for i in range(0, image.bands):
  result.append(image.extract_band(i))

@jcupitt
Member

jcupitt commented Apr 4, 2017

You can call the C API directly, for example vips_extract_band():

http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/libvips-conversion.html#vips-extract-band

But the C API is supposed to be nice for C programmers, it's not designed to be easy to call from other languages. For example, it makes heavy use of varargs, which is difficult to bind safely.

I think the best solution would be to use the layer below: the thing that the C API calls. This gobject layer supports full introspection: you can write a general function which can invoke any operation in libvips. If you write this one function, you then get the whole libvips API.

I did a PHP binding recently like this. There's a 1,500 line C module you install that adds vips_call, a single php function that can run any vips operation:

https://github.com/jcupitt/php-vips-ext/blob/master/vips.c

It adds a couple of other functions too. Then there's a larger layer over that, all in php, that makes a nice php API:

https://github.com/jcupitt/php-vips

That's about 1000 lines of hand-written docs, 1000 lines of hand-written PHP, and 3000 lines of automatically-generated phpdoc.

So I think I would reimplement the C stuff above in JS using node-ffi, then write a little wrapper to make it nice to use, then auto-generate everything else. It should only be a few days' work, and you'd have a complete, dynamic libvips binding that kept itself up to date as the library expanded.

There are some more notes on binding here:

http://www.vips.ecs.soton.ac.uk/supported/current/doc/html/libvips/binding.html

They should be expanded a bit.

@korya

korya commented Nov 25, 2019

Hi there. This thread seems to be the helpline for libvips. I am sorry if that's not the case.

Anyway, I need to convert a 360° equirectangular image into 6 cubic tiles. It should be something similar to https://jaxry.github.io/panorama-to-cubemap/. I have code to perform the transformation manually in C++ and Go, and it works OK. But I recently discovered libvips and was wondering how to do it in libvips, and whether this kind of conversion would be more efficient with libvips. I would love to see any examples implementing similar functionality.

Thanks.

@jcupitt
Member

jcupitt commented Nov 25, 2019

Hello @korya,

I'd use an xy image (each pixel's value is its own xy coordinate), then write an expression for each of your six projections. You can use mapim to apply the transform to your source image with whatever interpolator you like.

There's an example here:

http://libvips.blogspot.com/2015/11/fancy-transforms.html

And some others higher in this thread.

I don't know what performance would be like. OK, probably, but perhaps not great. You can generate the transform at a lower res and upsample, which can give a nice speedup as long as the rate of change isn't too high.

I should update that post for current pyvips.
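
The general shape of it in pyvips is below: the index image holds, for every output pixel, the source x and y to sample, and mapim pulls the pixels through whatever interpolator you pick. The transform here is just a horizontal mirror as a stand-in for the real cube-face projection maths (untested sketch):

import pyvips

source = pyvips.Image.new_from_file("equirect.jpg")

# band 0 is the output x coordinate, band 1 the output y coordinate
index = pyvips.Image.xyz(1024, 1024)

# placeholder warp: sample the source mirrored left to right; the cube-face
# projection formulas would replace this expression
index = (source.width - 1 - index[0]).bandjoin(index[1])

face = source.mapim(index, interpolate=pyvips.Interpolate.new("bicubic"))
face.write_to_file("face.png")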

@kayarre

kayarre commented Nov 26, 2019

I am using the output of libvips to perform a registration task. I have a bunch of cropped image dimensions that I use to create source images, and I use the embed method to put the cropped source image into the larger target image. This part works fantastically!

The challenge is that the background has a lot of off-white, so using extend="VIPS_EXTEND_WHITE" creates artificial edges that registration doesn't like, while extend="VIPS_EXTEND_COPY" creates some weirdness in the corners and makes streaky-looking lines when there is a noticeable difference in size between the source image and the target image.

I would like to either average those weird edges or somehow blur them to remove any sharp gradients.

Here is an example:
[example image]

Also note, not all the images have as much red; others look like:
[second example image]

I just had an idea: I can create the new image with embedding, then apply gaussian blur, and then re-embed the same image! I'm open to other ideas as well. Thank you so much for libvips and the pyvips bindings!

I see there is also draw_smudge, which I may try as well.

@jcupitt
Member

jcupitt commented Nov 27, 2019

Yes, I would take a square from a corner, replicate that to make something the right size, blur it and insert the input image on top. In Python (untested):

image = pyvips.Image.new...

target_width = image.width * 2
target_height = image.height * 2
corner = image.crop(0, 0, 100, 100)
background = corner \
    .replicate(1 + target_width // 100, 1 + target_height // 100) \
    .crop(0, 0, target_width, target_height) \
    .gaussblur(3)
result = background.insert(image,
                           (target_width - image.width) // 2,
                           (target_height - image.height) // 2)

Don't use draw_smudge, it's just for paintbox programs.

@jcupitt
Member

jcupitt commented Nov 27, 2019

That code should be quick, since libvips will only compute the pixels it needs. It won't blur the whole image, just the edge area.

I guess you could maybe blur the corner and then replicate that, but you might get visible seams if you don't blur enough.

You'll have a seam where the image overlays the background -- your registration might be upset by that too. If necessary, you could feather the edge.
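
For the feathering, one possibility (continuing the untested snippet above) is to blend with a blurred mask instead of doing a hard insert:

# pad the image out to the full target size
fg = image.embed((target_width - image.width) // 2,
                 (target_height - image.height) // 2,
                 target_width, target_height)

# a white rectangle where the image sits, blurred to soften the transition
mask = pyvips.Image.black(target_width, target_height)
mask = mask.draw_rect(255,
                      (target_width - image.width) // 2,
                      (target_height - image.height) // 2,
                      image.width, image.height, fill=True)
mask = mask.gaussblur(10)

# blend=True uses the mask as a 0-255 blend factor rather than a hard switch
result = mask.ifthenelse(fg, background, blend=True)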

@CanadianHusky

Hello,

I can execute this code to boost the saturation of an RGB image in the HSV colourspace, with a 1550 ms execution time:

b = a.SRGB2HSV().Linear({1.0, 3.0, 1.0}, {0.0, 0.0, 0.0}).HSV2sRGB

What am I gaining by using Lab2LCh? The documentation says:

HSV is a crude polar coordinate system for RGB images. It is provided for compatibility with other image processing systems. See vips_Lab2LCh() for a much better colour space

The post above from April 2, 2013 suggests using
b = a.sRGB2XYZ().XYZ2Lab().Lab2LCh()
but that must be outdated, because there is no direct conversion sRGB2XYZ() anymore in the current NetVips binding.

The "VIPS colour spaces interconvert" figure also confirms that there is no direct conversion from sRGB to XYZ:
https://libvips.github.io/libvips/API/current/libvips-colour.html

Therefore the long route is now
b = a.SRGB2scRGB().ScRGB2XYZ.XYZ2Lab.Lab2LCh.Linear({1.0, 3.0, 1.0}, {0.0, 0.0, 0.0}).LCh2Lab.Lab2XYZ.XYZ2scRGB.ScRGB2sRGB
which executes in 5000 ms, roughly three times slower than the HSV method.
I did not see a worthwhile benefit in the output from doing three colourspace conversions and another three back to RGB, when it can be done with HSV in a single step.
What am I missing?
Thank you

@CanadianHusky

Another question, about black thresholding:

a = RGB image
t = threshold (0 to 255)
Any pixel less than rgb(t,t,t) should turn black (0,0,0).
Any pixel greater than rgb(t,t,t) should stay the same as the original.
The code below executes and is mostly correct up to a certain value:

b = (a < {t, t, t}).Ifthenelse({0, 0, 0}, a)

t = 90
[result image]

t = 128 - why is the function attacking the red colour?
[result image]

t = 160 - why is red being changed when it should have been copied from the original image (left)?
[result image]

@jcupitt
Member

jcupitt commented Mar 16, 2020

Hello, boosting saturation in HSV won't give good results: some colours will be boosted a lot, some will hardly change, most will change brightness as well as saturation.

I would use:

y = x.colourspace("lch") * [1, 3, 1]

colourspace will use the best series of conversions to move from the current colourspace to the target.

LCh is a polar space, i.e. boosting C moves colours out in a straight line from the neutral axis. You can get the same effect in Lab by scaling a and b equally:

y = x.colourspace("lab") * [1, 3, 3]

And you'll save the rectangular -> polar conversion.

If you're OK with HSV and its problems, of course use that.

@jcupitt
Member

jcupitt commented Mar 16, 2020

The line:

rgb_image < [1, 2, 3]

Will give a three-band boolean image with 255 in band 0 where it is less than 1, and so on for the other bands.

I think you need to think about what you want RGB < something to mean. Do you want to select certain colours, or are you really interested in brightness? If you want to select some colours, perhaps you'll need to AND the bands together afterwards. If it's brightness, I would go to mono first.

@jcupitt
Member

jcupitt commented Mar 16, 2020

... AND bands together with y = x.bandand() in Python. There's EOR and OR as well.
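
In Python the two interpretations look something like this (untested):

import pyvips

a = pyvips.Image.new_from_file("input.jpg")
t = 128

# brightness threshold: go to mono first and use that as the condition
mono = a.colourspace("b-w")
by_brightness = (mono < t).ifthenelse(0, a)

# colour selection: only pixels where every band is below t turn black
below_everywhere = (a < t).bandand()
by_colour = below_everywhere.ifthenelse(0, a)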

@CanadianHusky

This pseudo-code neutralizes grey in an RGB image: it basically makes r=g=b whenever the channels are within a threshold of each other.
How would this best be implemented with libvips?

p1(x,y) = max(r,g,b) = a pixel value 0-255
p2(x,y) = min(r,g,b)

The libvips functions min and max give me the image minimum and maximum values, not values at the individual pixel level. I think this case requires a bandsplit operation as well, but I am not sure how to do the comparison against a threshold without being forced to use external looping code, which is painfully slow (but works).

rgb = original image
t = threshold value (int), res = result image
if p1-p2 > t then res = rgb (copy original as is) else res = (r+g+b)/3 (average of r, g, b written into all 3 channels)

example with numbers, at any point x,y
r = 190
g = 200
b = 195
t = 15
max = 200 : min = 190
delta = 10 < t
output pixel = (190+200+195)/3 = 195 for all 3 channels rgb(195,195,195)

r = 170
g = 200
b = 195
t = 15
max = 200 : min = 170
delta = 30 > t
output pixel = same as original

thank you

@jcupitt
Member

jcupitt commented Mar 16, 2020

I'd do that in LCh. Just look for C < 10 (for example) and set C to 0 in that case.

l, c, h = image.colourspace("lch").bandsplit()
c = (c < 10).ifthenelse(0, c)
image = l.bandjoin([c, h])
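
A complete untested version, with the conversion back to sRGB at the end (the extra copy just re-tags the joined bands as LCh so colourspace knows the route back):

import pyvips

image = pyvips.Image.new_from_file("input.jpg")

# anything with chroma below the threshold has its chroma set to zero
l, c, h = image.colourspace("lch").bandsplit()
c = (c < 10).ifthenelse(0, c)

out = l.bandjoin([c, h]).copy(interpretation="lch").colourspace("srgb")
out.write_to_file("output.jpg")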

@jcupitt
Member

jcupitt commented Mar 16, 2020

... or use HSV I suppose if you're not worried about accuracy.

@CanadianHusky

Works fantastically, thank you.

@CanadianHusky

How can I convert from LCh or HSV colourspace back to RGB colourspace while staying in memory (no disc file)?

This works:

vipsimg = Image.NewFromFile(inputfile, True, access:=Enums.Access.Sequential)
result = vipsimg.Colourspace("lch") * {1, 2, 1}

If I save to disc now the file is OK, but I need to read it again from disc, which costs performance.

result = result.Colourspace("rgb", "lch") ' <--- this crashes with the error message:

unable to call colourspace vips_colourspace: no known route from 'lch' to 'rgb'

Why?
I need to do additional processing in RGB colourspace and prefer not to write back to disc, for performance. Caching in memory is OK if there is no other way.
Can someone shed some light on this please?

@jcupitt
Member

jcupitt commented Apr 14, 2020

You need srgb, not rgb.

You can reuse result if you want, but remember that writing more than once will mean more than one pass over the input image, so you won't be able to use sequential mode.

You don't need to give two args to colourspace: it knows what colourspace an image is in, so you just need to give the destination.

Try (python):

im = pyvips.Image.new_from_file(sys.argv[1])
im = im.colourspace("lch") * [1, 2, 1]
im.write_to_file("x.png")
im = im.colourspace("srgb") * [1, 1.5, 1]
im.write_to_file("y.png")

Now x.png will be very saturated, and y.png will be very saturated and also very green.

You could go a little quicker: that code will compute the saturation boost twice -- once when writing x.png and a second time when executing the pipeline for y.png. If you render the saturated image to memory, you can avoid the recomputation:

im = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
im = im.colourspace("lch") * [1, 2, 1]
im = im.copy_memory()
im.write_to_file("x.png")
im = im.colourspace("srgb") * [1, 1.5, 1]
im.write_to_file("y.png")

Now the second pipeline will run from the memory copy to y.png, so you can turn sequential mode back on for the original load. Of course, this will need more RAM.

@CanadianHusky

Thank you for the detailed explanation. The saturation boost was only one of the operations in a chain of 5 or 6.
The srgb conversion works without error now.
The output of each step in the pipeline acts as input for the next. The sequence of commands matters, and the colourspace they are executed in matters as well.

All pipeline commands are executed as assignments back to the original image,
for example

img_input = img_input.Colourspace("hsv") * {1, param, 1}
img_input = img_input.Gamma(param)
img_input = (img_input.Colourspace("b-w") < param).Ifthenelse({0, 0, 0}, img_input, False)

Only a single img.write_to_file is executed at the end.
As far as I understand your design, this should be OK for staying in sequential access mode.

I will check whether losing sequential access mode or copying to memory makes more sense for performance.

What exactly does this do?
im = im.copy_memory()
I am guessing it creates an in-memory clone with the same colourspace as the original?
Thank you

@jcupitt
Member

jcupitt commented Apr 14, 2020

Yes, y = x.copy_memory() allocates a big chunk of memory and executes the pipeline that x represents to fill it with pixels. It wraps the memory chunk up as a new image and points y at it.

@CanadianHusky

Hello,

Is it possible to chain a few well-chosen morph or conv commands together to 'fill in the blanks', even for curved paths? Meaning, try to complete the missing pixels in a scanned image like this?

[example scan]

I experimented with the rank filter and am able to make pixels stronger (thicken lines), but it works either in all directions, or only horizontally, or only vertically, depending on the parameters used.

I suspect something like this requires a more complex algorithm.

@jcupitt
Member

jcupitt commented Apr 21, 2020

I think you'd usually use a rotating mask -- so make something to detect horizontal lines, then rotate it four or eight ways and take the maximum response. There's a thing to rotate a conv mask by 45 degrees.
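
A rough pyvips sketch of that idea (untested, and the mask values are just an example):

import pyvips

image = pyvips.Image.new_from_file("scan.png")

# a small horizontal line detector; rot45 needs an odd, square mask
mask = pyvips.Image.new_from_array([[-1, -1, -1],
                                    [ 2,  2,  2],
                                    [-1, -1, -1]])

# convolve with the mask in four orientations, 45 degrees apart
responses = []
for _ in range(4):
    responses.append(image.conv(mask))
    mask = mask.rot45()

# take the per-pixel maximum of the responses
best = responses[0]
for r in responses[1:]:
    best = (r > best).ifthenelse(r, best)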

@jcupitt
Member

jcupitt commented May 22, 2020

Looking back over this issue, I think we should close it. Many of the examples are no longer correct.

Please open a new issue if you have a new question.

@angstyloop

This might be helpful. Did I do this right @jcupitt? Am I doing anything silly?

Black and white thresholding with VIPS in C.

https://gist.github.com/angstyloop/37d4454442beea452b718bb11469f2a4

@jcupitt
Member

jcupitt commented Mar 5, 2023

Nice! Though you can do this with built-in operators, see: libvips/ruby-vips#243 (comment)

@angstyloop

Thanks for not two but three replies to the same question!! Your example was super helpful - I see how simple it is to create "boolean" images in VIPS now using the built-in relational operations.

Also, hanging stuff off a "dummy" VipsObject so it gets unreffed automatically is a great pattern for standalone examples and small programs. I'm definitely going to start using that, which will save me from relying on vim so much to type "boilerplate" code.

Worth saying though that I don't really mind the boilerplate - it's pretty obvious what to type next (most of the time), and there are clear patterns to follow. VIPS just feels like a natural extension of GLib. Being able to do things concisely in C is icing on the cake!
