Discussions for Scenes for Version 1.2.X Fullscreen Mode here


EverthangForever
Member since Oct 2009

2459 posts
2 January 2018 (edited)
After a bit of thinking I don't think it will actually work unless the girl isn't moving much, due to the low framerate of the clips.

That's true; it would be relative and dependent on the speed of movement.
I haven't worried about pursuing 3D which needs glasses, because there has always been a tad too much public resistance to wearing them. I'm waiting for the team at MIT to develop their automultiscopic displays, which use algorithms to convert stereo input for glasses-free 3D.
https://www.engadget.com/2017/07/12/mit-solves-a-major-problem-holding-up-glasses-free-3d-tvs/
Nevertheless, it's amazing how fiddling with enough short view opacity animations (like some bloated code we didn't post during the anim GIF frame conversions in .scn) can give you a fairly flicker-free, contiguous effect in sampler textures in 2D.

On the paucity-of-frame-rate issue, I would have thought that the overall frame rate would be doubled if both the original and a time-lagged copy were superimposed in the same space. Assuming each eye needs about 25 fps for contiguity, we would still need some way of delaying the initial presentation of one side by some multiple of that. I don't think a shader setting could introduce that lag as an afterthought, so to speak. It's more like Totem giving us a lag function in their fullscreen GUI pertaining to individual clipSprite initiation. Alternatively, it could be experimented with by using two computers starting the clip at slightly different times to mux the display offset onto each shutter eye. Almost worth opening another account just to try it and see what eventuates.
Z22
Member since Aug 2017

1166 posts
3 January 2018
You might be able to use u_elapsed to set up a copy. Something like:

if (u_elapsed == skip)   // float equality is fragile; >= would be safer in practice
{
    girlB.rgba = girlA.rgba;
}
if (u_elapsed > skip)
{
    skip = u_elapsed + 1.0 / 30.0;   // not sure what u_elapsed actually returns
}
Then I think B would need to be copied to another buffer (girlC) just before it gets overwritten by the first copy, so at skip - 0.001 or something.
Then use girlC as the output for one half of the screen and the normal clip for the other.

It may need to pass out the value for skip so it doesn't get reset each frame, and it may need to pass out girlB to be passed back in next frame. I think there is a multitexturebuffer that can hold the buffer between shaders without having to pass it about as a source.

I used u_elapsed for something like that when I was trying to make a delay to recreate the mirror scene from The Last Jedi, but mine was dropping frames instead of delaying.

The trick with the empty feedbackbuffer gives a one-frame delay (see my first acid), but that wouldn't work for 3D. It may be useful for motion interpolation, though (way beyond me at the moment).
TheEmu
Member since Jul 2012

3309 posts
3 January 2018 (edited)
You can't output anything like "skip" from a fragment shader, but you could from a vertex shader. Really such things should be done even earlier in the graphics pipeline, but the only hooks into it that we have are at the vertex and fragment shader stages. At least I don't think that Totem have provided anything else, though something corresponding to the "compute" stage would be useful - currently all we have at that stage are the animate clauses. Extensions could be made to the scene description language used in the .scn files (which would execute on the CPU), or scenes could be allowed to invoke "compute shaders" that would run on the GPU but only once per frame.

Very roughly, a compute shader can produce "uniform" variables, which have the same value at every vertex and hence at every pixel; vertex shaders can produce "varying" variables, which vary from vertex to vertex and whose value at any pixel is interpolated from the three vertices of the triangle in which the pixel lies. Fragment shaders cannot produce any variables as output except for the pixel colour and opacity and the depth in the Z buffer (and even if one could update a global variable, there is nothing that could read it). You could use the colour output to hold something other than a colour and use the resultant frame buffer in another shader, but for a simple scalar like "skip" that would be rather silly, as it would be calculated separately at each pixel even though it only needs to be calculated once and used at each pixel.
Z22
Member since Aug 2017

1166 posts
3 January 2018
Ahh, I was suggesting using a varying for the skip value. Smeg. In that case, is it possible to store the value in one pixel of the output and have that output be an input to the same shader next frame? I know you can pass values through different shaders by converting them to RGB, so...
TheEmu
Member since Jul 2012

3309 posts
3 January 2018 (edited)
Yes, it is possible to use frame buffers to pass values like that - it seems "wrong" because they are not images, but in reality there is nothing wrong about it other than being rather inefficient when you are passing a single value. It is probably better to use a vertex shader, where you can define a varying variable and give it a value - that way you only compute the same value once per vertex rather than once per pixel, i.e. a few tens to a few hundreds of times rather than a few thousands or a few millions of times. iStripper does support vertex shaders.
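As an illustration of that frame buffer trick, a minimal sketch (an assumption-laden example, supposing the shader's previous output is fed back in as texture0 each frame):

uniform sampler2D texture0; // assumed: this shader's own output from last frame
uniform float u_Elapsed;

void main()
{
    // read back the value written last frame
    float skip = texture2D(texture0, vec2(0.5, 0.5)).r;

    if (u_Elapsed >= skip)
        skip = u_Elapsed + 1.0 / 30.0; // schedule the next snapshot time

    // colour channels are clamped to 0..1, so a raw time in seconds
    // would really need to be encoded, e.g. split across the rgba channels
    gl_FragColor = vec4(skip, 0.0, 0.0, 1.0);
}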
Z22
Member since Aug 2017

1166 posts
3 January 2018
So are you saying I could create the skip value inside a vertex shader and pass it out as a varying to be passed into the fragment shader?
TheEmu
Member since Jul 2012

3309 posts
3 January 2018 (edited)
@Z22 - yes, you can do exactly that. I have not used it for anything, but I have tested that it works. Just declare a varying variable rather than a uniform one and assign a value to it in a vertex shader; you can then use it in a subsequent fragment shader. If you assign different values to different vertices, then the value at each pixel is interpolated between the three vertices of the triangle in which the pixel being worked on in the fragment shader lies.

As yet very few vertex shaders have been used in scenes, but they use the same shader language as fragment shaders, though with different restrictions on what you can do in them. The few examples that have been used so far have had the file extension .vsh, but this is purely an iStripper users' convention.
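For instance, a minimal pair along those lines (a sketch; v_Fade and the per-vertex value are invented for illustration):

// example.vsh - assign a custom varying at each vertex
varying vec4 gl_TexCoord[];
varying float v_Fade;

void main()
{
    v_Fade = 0.5 + 0.5 * sin(gl_Vertex.x); // any per-vertex value
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}

// example.fsh - receives it, interpolated across each triangle
uniform sampler2D texture0;
varying float v_Fade;

void main()
{
    gl_FragColor = texture2D(texture0, gl_TexCoord[0].xy) * v_Fade;
}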
TheEmu
Member since Jul 2012

3309 posts
3 January 2018 (edited)
@Z22 - here is a very simple vertex shader from an old Totem-supplied scene. It just moves the vertices in a time-dependent way to crudely simulate waving grass (it was operating on an image of blades of grass). This is the normal use of vertex shaders. I am not certain which, if any, of the outputs from this shader have to be provided by every vertex shader, but you can add as many extra varying variables as you want, and they can be simple floats or vecs as in the example (it makes little sense to output ints or bools because of the way they would be interpolated between vertices, but I suppose they could be used if they were needed).

// Texture coordinates and colour for the fragment

varying vec4 gl_TexCoord[];
varying vec4 gl_FrontColor;

uniform float u_Elapsed;

void main()
{
    vec4 displacedVertex;
    displacedVertex = gl_Vertex;

    float len = length( displacedVertex );

    displacedVertex.x += 5.0 * sin( u_Elapsed + len );
    //displacedVertex.y += 2.0 * cos( u_Elapsed + len );

    // Position on the screen
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * displacedVertex;

    // Position within the texture
    gl_TexCoord[0] = gl_MultiTexCoord0;

    gl_FrontColor = gl_Color;
}

I suppose a minimal vertex shader would be:

varying vec4 gl_TexCoord[];
varying vec4 gl_FrontColor;

void main()
{
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}
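A matching minimal fragment shader (a sketch, assuming texture0 holds the sprite's image) would be:

uniform sampler2D texture0;

void main()
{
    gl_FragColor = texture2D(texture0, gl_TexCoord[0].xy) * gl_Color;
}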
TheEmu
Member since Jul 2012

3309 posts
3 January 2018
I forgot to say that you can control the number of vertices used for a quad, sprite or clipSprite by using a resolution: clause in the relevant node in the .scn file, e.g.

resolution : 200 // use 200 vertices for this sprite.

I think the default is 4 - one at each corner, which divides the sprite into two triangles via one of its diagonals.
Wyldanimal
MODERATOR
Member since Mar 2008

3909 posts
3 January 2018

I forgot to say that you can control the number of vertices used for a quad, sprite or clipSprite by using a resolution: clause in the relevant node in the .scn file, e.g.

resolution : 200 // use 200 vertices for this sprite.

I think the default is 4 - one at each corner, which divides the sprite into two triangles via one of its diagonals.

This might also affect the resolution of the clipplane:

I'll have to test this again...
Z22
Member since Aug 2017

1166 posts
4 January 2018
Ahhh... I had messed about with the vertex shader on the pool scene.

I wondered if there was a way to increase the vertex count: with the resolution setting at something really high, I could then offset the z of each vertex by the colour (RGB combined) to get cheated depth. May be able to get 3D that way, or at least a lumpy quad.. :/
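A sketch of that idea (hypothetical: it relies on the driver allowing texture fetches in vertex shaders via texture2DLod, and the 50.0 depth scale is an arbitrary choice):

uniform sampler2D texture0; // assumed: the image driving the displacement
varying vec4 gl_TexCoord[];
varying vec4 gl_FrontColor;

void main()
{
    vec4 v = gl_Vertex;

    // brightness of the image at this vertex (vertex texture fetch)
    vec3 rgb = texture2DLod( texture0, gl_MultiTexCoord0.xy, 0.0 ).rgb;
    v.z += dot( rgb, vec3(1.0 / 3.0) ) * 50.0; // push brighter areas forward

    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * v;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}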
EverthangForever
Member since Oct 2009

2459 posts
4 January 2018 (edited)
@Guys,
Is there any way to time-switch (swap) backgrounds in a .scn without depending on .scn sprite opacity or pos: animations? Could you, say, vertex-render these via triangles onto a quad, then make it disappear using some other type of pre-timed variable? I could not get the .scn camera node to animate clipplane at all. We still haven't settled on how many parameter values a clipplane: takes, or what they mean :-/
http://www.istripper.com/forum/thread/27449/54?post=562513
Z22
Member since Aug 2017

1166 posts
4 January 2018 (edited)
@ET
Pseudocode:

if (sin(u_elapsed * multiplier) > 0.5)
{
    output = texture 1;
}
else
{
    output = texture 2;
}

gl_FragColor = output;


That should do it. You can alter how fast it switches with a multiplier on u_elapsed, e.g. u_elapsed*120.0 is about 60 fps or 120 fps, can't remember... so if you want every 30 secs it would be more like u_elapsed*0.1 or something...
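(Assuming u_elapsed counts seconds: sin(u_elapsed * k) repeats every 2π/k seconds, so a full cycle of about 60 seconds - roughly 30 seconds per texture - needs k ≈ 2π/60 ≈ 0.1, which matches the 0.1 suggested above. Note too that sin > 0.5 holds for only about a third of each cycle, so the two textures get unequal screen time; comparing against 0.0 instead splits the cycle evenly.)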
EverthangForever
Member since Oct 2009

2459 posts
4 January 2018 (edited)
I had been modding a lot of early frag shaders here just using u_elapsed*pi*whatever in the header to slow them down a tad, because hot damn, those Shadertoy geeks were always running them too fast. Thanks lots @Z22, I will be sure to look into that..👍
TheEmu
Member since Jul 2012

3309 posts
4 January 2018 (edited)
There is no really convenient way to do it. Modulating opacity or position in a scene file works, but gets cumbersome when more than a couple of images are involved (sometimes you can throw in animating rot, scale or size as well, but not by themselves), and the effects you can achieve are limited given the current options offered by animate.

You can do anything in a fragment shader - most easily by either selecting between up to four input textures as per Z22's pseudocode, or by cross-fading between them to get a more controlled transition (or using any transition effect that you can think of). However, a single shader is limited to handling four textures in this way, which, depending on exactly what you want to achieve, may not be too awkward to do directly in the scene file. You could instead use multiple shaders (or one shader used multiple times with its behaviour controlled by a uniform variable), each acting on a single texture. That is probably the simplest and most flexible way to do it for more than a few backgrounds. However, it does mean running multiple shaders, each acting on a large image (if we are talking about backgrounds), and that will quickly eat up your GPU resources if you have more than a small number of backgrounds handled this way.

Amongst many other things relating to fullscreen scenes, I would like to see Totem add the following enhancements.

Firstly, a way to specify that the image used for a texture is changed periodically, something like

texture {
    id : background
    source : backgrounds/
    period : 60
    order : random
}

which would select a new image from the backgrounds folder every 60 seconds. Alternative(s) to order: random should allow the images to be used in some fixed order based on their names - simple alphabetic ordering is all that would be needed, because we could use names like BG-001, BG-002, BG-003 to control the order. It would be best if there were an option controlling whether or not to restart after the last image has been displayed. Even better would be if it were possible to specify a more flexible schedule, e.g. 20 sec for the first image, 24 for the second, 10 for the next 5, etc., and to be able to specify how the transitions between images should be handled (cross fade, slide left, etc.). A change like this would provide a basic random or sequential slideshow capability, would allow "stories" to be told in which the background changes in a more controlled manner, and would allow a series of captions or other texts to be displayed.

Secondly, a way of delaying the time at which an animate: takes effect. Something as simple as an optional second time, so that

animate : 20, 10, forward, ....

would mean start an animation lasting 20 seconds but delay it by 10 seconds. This simple change would make the opacity modulation mechanism I use in many of my scenes much more powerful.

Thirdly, a way of using custom, user-defined easing curves in place of the fixed selection currently available. The basic Qt toolkit supports custom curves, and it would be very nice to be able to specify a curve as a table of value pairs in a scene file and then use it as an easing curve.
EverthangForever
Member since Oct 2009

2459 posts
4 January 2018 (edited)
@TheEmu, I second that. Hope @ToTeam are listening 👍
Z22
Member since Aug 2017

1166 posts
4 January 2018
I think they have bigger fish to fry, tbh. I guess most of their time is taken up by bug fixes and the VR app, which needs loads of work to get it looking good.
TheEmu
Member since Jul 2012

3309 posts
4 January 2018 (edited)
@Z22 - I certainly don't expect Totem to do anything for scene creators, at least not in the foreseeable future, and those are only a few of the 50 or so ideas that I have and would like to see implemented. However, as the subject had come up, I decided to mention these few, which would help with the problems behind EverThang's question. I think, but do not know, that these would be simple to implement, but that does not mean I think they ought to be given much priority. (If Totem would like to hire me to do it then I would be delighted to try.)

By far the most important thing to do with regard to full screen scenes is to tackle how they are listed in the GUI. The previous incarnation, VGHD, handled this better in that it provided a simple two-level hierarchy and showed all of a long file name; the current version makes finding scenes when you have a lot (I currently have over 5000) much more difficult than it used to be.
Z22
Member since Aug 2017

1166 posts
4 January 2018
Yeh, I bet there are a shed load of improvements that could be made, but I suppose it really depends on how useful they are to most users. I guess, as there aren't many of us actually making scenes, it's a low priority for them.

Personally, for the desktop I would like to see some neural network additions, like offline upscaling (a variant of SRGAN), which would give far better quality upscales than the current ones, and realtime motion interpolation. They are quick and give really good results. I have made these suggestions before, but they didn't gain much interest from the community.

TheEmu
Member since Jul 2012

3309 posts
4 January 2018
Agreed - I have quite often commented on other people's suggestions to say that although the suggestions may be good ideas in themselves, they need to provide some tangible benefit to Totem, and not just to their customers, and especially not to a small subset of customers, before Totem can be expected to have any interest in implementing them. I have actually been rather pleasantly surprised at how much Totem do for us that does not seem to offer Totem much in return - such as the very existence of any support for us creating scenes in the first place.
Z22
Member since Aug 2017

1166 posts
4 January 2018
I wonder how much we will be able to do with the VR version...
EverthangForever
Member since Oct 2009

2459 posts
4 January 2018 (edited)
I guess most of their time is taken up by bug fixes and the VR app,

@Z22, I don't think Totem are ***** up a lot of @Team priority time on the VR app. Who would want to go into competition with Google or Samsung et al. when the cost of regular shooting, processing, marketing and troubleshooting headsets for this new 'clip' media would be so prohibitive? It would surely mean LESS production at a higher price point, and therefore a very limited (if any) following compared to now.
Z22
Member since Aug 2017

1166 posts
4 January 2018
It's not that bad once you have it up and running. I started one several years ago in CryEngine. I had a 2-million-poly body scan from Triplegangers that I cut down to 50,000 polys (the max CryEngine will import) and re-rigged to avoid some of the folding problems at the joints. Grabbed some ballet dancing mocap from a Stanford repository, I think, and had it all up and running in CryEngine in a week. It looked quite nice - better body and movement than Totem's effort so far, but no hair. That's from never having done any of those things before, so...
TheEmu
Member since Jul 2012

3309 posts
4 January 2018
Back to Everthang's question.

You could, I suppose, create a single image containing all of the backgrounds next to each other, as in a film strip, then use a vertex shader and change the positions of the vertices so that at any one time everything except one pair of triangles forming a rectangle is squashed up into zero thickness or is off screen, and do this in such a way that only one of the original images is visible at a time. Depending on how you programmed the vertex shader, you could achieve a number of simple transition effects this way, including sudden changes of background and sliding one image in to replace another. If you split each of the original images into two or four pieces, then you could do curtain-like transitions in which a new image slides in from both sides or from each corner. As yet I have not really experimented with vertex shaders, so I can't provide any example code to do any of this - maybe one day I will get round to it, but something else always seems to crop up.
Z22
Member since Aug 2017

1166 posts
4 January 2018
You should be able to use texture3D to do that without using a vertex shader.
TheEmu
Member since Jul 2012

3309 posts
4 January 2018 (edited)
But there is currently no way to provide the source needed to use texture3D - it's one of the many things on my list of possible upgrades to the scene system. You could do it in a fragment shader using texture2D, but a vertex shader should place a lower load on the GPU.
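A fragment shader version of the film strip idea might look like this (a sketch: the strip layout, the four-image count and the 30 second period are all assumptions):

uniform sampler2D texture0; // assumed: N background images packed side by side
uniform float u_Elapsed;

const float N = 4.0; // number of images in the strip

void main()
{
    float frame = floor( mod( u_Elapsed / 30.0, N ) ); // advance every 30 seconds
    vec2 uv = gl_TexCoord[0].xy;
    uv.x = ( uv.x + frame ) / N; // window onto just one image of the strip
    gl_FragColor = texture2D( texture0, uv );
}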
Z22
Member since Aug 2017

1166 posts
4 January 2018
texture3D uses the same input you suggested, a horizontal film strip. At least that's what it looked like to me when I read up on it.
TheEmu
Member since Jul 2012

3309 posts
4 January 2018
On Shadertoy.com it seems to require a folder holding six individual images, one for each face of a cube, but perhaps they get concatenated into a single texture by the system.
Z22
Member since Aug 2017

1166 posts
4 January 2018
I guess that way saves a lot of space, as a 128x128x128 texture would otherwise be a 128x16384 strip, and 128x128 is quite low-res really. I guess we can't use six textures anyway, but maybe you could get away with three if you wanted to do some volumetric fog.
EverthangForever
Member since Oct 2009

2459 posts
5 January 2018 (edited)
@Guys, OK, I tried to use @Z22's pseudocode as a frag shader (below) to alternate the above two textures as timed backgrounds, but I need to see the right syntax, as it is not working with the .scn (also below). Can someone show me what to change to make it alternate the background every 30 seconds or so, please? Thanks lots :-)

For SwapBg.fsh:
uniform sampler2D texture0;
uniform sampler2D texture1;
uniform float u_Elapsed;

//varying vec4 gl_TexCoord[];
//vec4 vTexCoord = gl_TexCoord[0];

void main(void)

if(( sin(u_Elapsed*01) ) > 0.5)
{
    output = texture0;
}
else
{
    output = texture1;
}

gl_FragColor = output;
And for the .scn:
clip {
    id: Dancer
    deny: top, table
}
texture {
    id: space0
    source: Backgrounds/redbg.jpg
}
texture {
    id: space1
    source: Backgrounds/bluebg.jpg
}
camera {
    type: 2D
    pos: 0, 0
    size: 800, 450

    sprite {
        pos: 380, 550
        hotspot: 0.5, 1.0
        size: 920, 655
        source: space0, 0
        source: space1, 1
        shader: fragment, SwapBg.fsh
    }
    clipSprite {
        pos: 400, 450, 1
        standingHeight: 400
        source: Dancer
        scale: -1, 1, 1
    }
}
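For comparison, here is a version of SwapBg.fsh that should at least compile (a sketch, not tested here): main needs a brace-delimited body, output is a reserved word in GLSL so a different variable name is needed, and the samplers have to be read with texture2D rather than assigned directly.

uniform sampler2D texture0;
uniform sampler2D texture1;
uniform float u_Elapsed;

void main(void)
{
    vec2 uv = gl_TexCoord[0].xy;
    vec4 col;

    // sin(u_Elapsed * 0.1) stays positive for about 31 seconds at a time,
    // so comparing against 0.0 swaps the background roughly every 30 seconds
    if (sin(u_Elapsed * 0.1) > 0.0)
        col = texture2D(texture0, uv);
    else
        col = texture2D(texture1, uv);

    gl_FragColor = col;
}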
