Convert a cubemap to an equirectangular panorama
I want to convert from a cube map [figure1] into an equirectangular panorama [figure2].
Figure1
Figure2
It is possible to go from spherical to cubic (by following: Convert 2:1 equirectangular panorama to cube map), but I am lost on how to reverse it.
Figure2 is to be rendered into a sphere using Unity.
Solution
Assuming the input image is in the following cubemap format:
The goal is to project the image to the equirectangular format like so:
The conversion algorithm is rather straightforward. In order to calculate the best estimate of the color at each pixel in the equirectangular image given a cubemap with 6 faces:
- Firstly, calculate polar coordinates that correspond to each pixel in the spherical image.
- Secondly, using the polar coordinates, form a vector and determine which face of the cubemap, and which pixel of that face, the vector lies on; just like a raycast from the center of a cube would hit one of its sides and a specific point on that side. (A minimal sketch of both steps follows this list.)
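A minimal sketch of these two steps for a single output pixel, using the same axis conventions as the full implementation further down (the helper names here are made up for illustration only):

using UnityEngine;

//Illustrative helper only; the full converter below does the same work inline
public static class DirectionSketch
{
    //Step 1: polar coordinates for output pixel (i, j) of a width x height equirectangular image
    public static Vector3 PixelToDirection(int i, int j, int width, int height)
    {
        float u = (float)i / width;       //0..1, left to right
        float v = 1f - (float)j / height; //0..1, bottom row maps to theta = PI
        float phi = u * 2f * Mathf.PI;    //Longitude
        float theta = v * Mathf.PI;       //Colatitude

        return new Vector3(
            -Mathf.Sin(phi) * Mathf.Sin(theta),
             Mathf.Cos(theta),
            -Mathf.Cos(phi) * Mathf.Sin(theta));
    }

    //Step 2: the face is decided by the component with the largest absolute value
    public static string DominantFace(Vector3 d)
    {
        float ax = Mathf.Abs(d.x), ay = Mathf.Abs(d.y), az = Mathf.Abs(d.z);
        if (ax >= ay && ax >= az) return d.x > 0 ? "Right" : "Left";
        if (ay >= ax && ay >= az) return d.y > 0 ? "Up" : "Down";
        return d.z > 0 ? "Front" : "Back";
    }
}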
Keep in mind that there are multiple methods to estimate the color of a pixel in the equirectangular image given a normalized coordinate (u,v) on a specific face of a cubemap. The most basic method, which is a very raw approximation and will be used in this answer for simplicity's sake, is to round the coordinates to a specific pixel and use that pixel. Other more advanced methods could calculate an average of a few neighbouring pixels.
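As a rough illustration of the averaging idea, Unity's built-in GetPixelBilinear weights the four nearest texels. The wrapper below is hypothetical; a production version would clamp near face borders so the sample does not bleed into the neighbouring (possibly empty) region of the 4x3 cross:

using UnityEngine;

public static class BilinearSampleSketch
{
    //x and y are continuous pixel coordinates into the full cross texture
    //(face-local coordinate plus the face offset, before any rounding)
    public static Color Sample(Texture2D sourceTexture, float x, float y)
    {
        //GetPixelBilinear takes normalised coordinates and requires a readable texture
        return sourceTexture.GetPixelBilinear(x / sourceTexture.width, y / sourceTexture.height);
    }
}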
The implementation of the algorithm will vary depending on the context. I did a quick implementation in Unity3D C# that shows how to implement the algorithm in a real world scenario. It runs on the CPU; there is a lot of room for improvement, but it is easy to understand.
using UnityEngine;

public static class CubemapConverter
{
    public static byte[] ConvertToEquirectangular(Texture2D sourceTexture, int outputWidth, int outputHeight)
    {
        Texture2D equiTexture = new Texture2D(outputWidth, outputHeight, TextureFormat.ARGB32, false);
        float u, v; //Normalised texture coordinates, from 0 to 1, starting at lower left corner
        float phi, theta; //Polar coordinates
        int cubeFaceWidth, cubeFaceHeight;

        cubeFaceWidth = sourceTexture.width / 4; //4 horizontal faces
        cubeFaceHeight = sourceTexture.height / 3; //3 vertical faces

        for (int j = 0; j < equiTexture.height; j++)
        {
            //Rows start from the bottom
            v = 1 - ((float)j / equiTexture.height);
            theta = v * Mathf.PI;

            for (int i = 0; i < equiTexture.width; i++)
            {
                //Columns start from the left
                u = ((float)i / equiTexture.width);
                phi = u * 2 * Mathf.PI;

                float x, y, z; //Unit vector
                x = Mathf.Sin(phi) * Mathf.Sin(theta) * -1;
                y = Mathf.Cos(theta);
                z = Mathf.Cos(phi) * Mathf.Sin(theta) * -1;

                float xa, ya, za;
                float a;

                a = Mathf.Max(new float[3] { Mathf.Abs(x), Mathf.Abs(y), Mathf.Abs(z) });

                //Vector parallel to the unit vector that lies on one of the cube faces
                xa = x / a;
                ya = y / a;
                za = z / a;

                Color color;
                int xPixel, yPixel;
                int xOffset, yOffset;

                if (xa == 1)
                {
                    //Right
                    xPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 2 * cubeFaceWidth; //Offset
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight; //Offset
                }
                else if (xa == -1)
                {
                    //Left
                    xPixel = (int)((((za + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = 0;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (ya == 1)
                {
                    //Up
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f) - 1f) * cubeFaceHeight);
                    yOffset = 2 * cubeFaceHeight;
                }
                else if (ya == -1)
                {
                    //Down
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((za + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = 0;
                }
                else if (za == 1)
                {
                    //Front
                    xPixel = (int)((((xa + 1f) / 2f)) * cubeFaceWidth);
                    xOffset = cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else if (za == -1)
                {
                    //Back
                    xPixel = (int)((((xa + 1f) / 2f) - 1f) * cubeFaceWidth);
                    xOffset = 3 * cubeFaceWidth;
                    yPixel = (int)((((ya + 1f) / 2f)) * cubeFaceHeight);
                    yOffset = cubeFaceHeight;
                }
                else
                {
                    Debug.LogWarning("Unknown face, something went wrong");
                    xPixel = 0;
                    yPixel = 0;
                    xOffset = 0;
                    yOffset = 0;
                }

                xPixel = Mathf.Abs(xPixel);
                yPixel = Mathf.Abs(yPixel);

                xPixel += xOffset;
                yPixel += yOffset;

                color = sourceTexture.GetPixel(xPixel, yPixel);
                equiTexture.SetPixel(i, j, color);
            }
        }

        equiTexture.Apply();
        var bytes = equiTexture.EncodeToPNG();
        Object.DestroyImmediate(equiTexture);

        return bytes;
    }
}
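For reference, a minimal editor-side usage sketch might look like the following; the menu path, output resolution and file name are arbitrary choices, and the selected texture must be readable (see the import settings at the end of this answer):

using System.IO;
using UnityEngine;
using UnityEditor;

public static class CubemapConverterUsage
{
    [MenuItem("Tools/Cubemap To Equirectangular (CPU)")]
    private static void Convert()
    {
        //The selected asset must be the 4x3 cubemap cross with Read/Write enabled
        Texture2D source = Selection.activeObject as Texture2D;
        if (source == null)
        {
            Debug.LogWarning("Select a readable Texture2D containing the cubemap cross first.");
            return;
        }

        //A 2:1 aspect ratio is the natural choice for an equirectangular image
        byte[] png = CubemapConverter.ConvertToEquirectangular(source, 4096, 2048);
        File.WriteAllBytes(Path.Combine(Application.dataPath, "Equirectangular.png"), png);
        AssetDatabase.Refresh();
    }
}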
In order to utilize the GPU I created a shader that does the same conversion. It is much faster than running the conversion pixel by pixel on the CPU, but unfortunately Unity imposes resolution limitations on cubemaps, so its usefulness is limited in scenarios where a high resolution input image is to be used.
Shader "Conversion/CubemapToEquirectangular" {
Properties {
_MainTex ("Cubemap (RGB)", CUBE) = "" {}
}
Subshader {
Pass {
ZTest Always Cull Off ZWrite Off
Fog { Mode off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
//#pragma fragmentoption ARB_precision_hint_nicest
#include "UnityCG.cginc"
#define PI 3.141592653589793
#define TWOPI 6.283185307179587
struct v2f {
float4 pos : POSITION;
float2 uv : TEXCOORD0;
};
samplerCUBE _MainTex;
v2f vert( appdata_img v )
{
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.uv = v.texcoord.xy * float2(TWOPI, PI);
return o;
}
fixed4 frag(v2f i) : COLOR
{
float theta = i.uv.y;
float phi = i.uv.x;
float3 unit = float3(0,0,0);
unit.x = sin(phi) * sin(theta) * -1;
unit.y = cos(theta) * -1;
unit.z = cos(phi) * sin(theta) * -1;
return texCUBE(_MainTex, unit);
}
ENDCG
}
}
Fallback Off
}
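One way to drive this shader, sketched under the assumption that the shader is available at runtime (or used in the editor) and that Graphics.Blit may take the cubemap as its source, is to blit into a temporary RenderTexture and read the result back; the class and method names below are made up:

using UnityEngine;

public static class GpuConversionSketch
{
    public static Texture2D Convert(Cubemap cubemap, int width, int height)
    {
        var material = new Material(Shader.Find("Conversion/CubemapToEquirectangular"));
        material.SetTexture("_MainTex", cubemap); //Blit also assigns the source to _MainTex

        var rt = RenderTexture.GetTemporary(width, height, 0, RenderTextureFormat.ARGB32);
        Graphics.Blit(cubemap, rt, material);

        //Read the rendered equirectangular image back into a Texture2D
        var previous = RenderTexture.active;
        RenderTexture.active = rt;
        var result = new Texture2D(width, height, TextureFormat.ARGB32, false);
        result.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        result.Apply();
        RenderTexture.active = previous;

        RenderTexture.ReleaseTemporary(rt);
        Object.DestroyImmediate(material);
        return result;
    }
}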
The quality of the resulting images can be greatly improved by either employing a more sophisticated method to estimate the color of a pixel during the conversion or by post-processing the resulting image (or both, actually). For example, a larger image could be generated, a blur filter applied, and the result then downsampled to the desired size.
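A hedged sketch of that idea, reusing the CPU converter above: convert at twice the target size and let a bilinear blit act as a cheap box-filter downsample (an explicit blur pass could be inserted before the downsample; the class name is made up):

using UnityEngine;

public static class SupersampleSketch
{
    public static Texture2D Convert(Texture2D sourceTexture, int outputWidth, int outputHeight)
    {
        //Convert at 2x resolution with the CPU converter shown earlier
        byte[] bigPng = CubemapConverter.ConvertToEquirectangular(sourceTexture, outputWidth * 2, outputHeight * 2);
        var big = new Texture2D(2, 2, TextureFormat.ARGB32, false);
        big.LoadImage(bigPng); //LoadImage resizes the texture to the PNG dimensions
        big.filterMode = FilterMode.Bilinear;

        //Downsample by blitting into a half-size temporary RenderTexture
        var rt = RenderTexture.GetTemporary(outputWidth, outputHeight, 0, RenderTextureFormat.ARGB32);
        Graphics.Blit(big, rt);

        var previous = RenderTexture.active;
        RenderTexture.active = rt;
        var result = new Texture2D(outputWidth, outputHeight, TextureFormat.ARGB32, false);
        result.ReadPixels(new Rect(0, 0, outputWidth, outputHeight), 0, 0);
        result.Apply();
        RenderTexture.active = previous;

        RenderTexture.ReleaseTemporary(rt);
        Object.DestroyImmediate(big);
        return result;
    }
}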
I created a simple Unity project with two editor wizards that show how to properly utilize either the C# code or the shader shown above. Get it here: https://github.com/Mapiarz/CubemapToEquirectangular
Remember to set proper import settings in Unity for your input images (a scripted example follows this list):
- Point filtering
- Truecolor format
- Disable mipmaps
- Non Power of 2: None (only for 2DTextures)
- Enable Read/Write (only for 2DTextures)
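If you prefer to apply these settings from code, a hypothetical editor helper could look like this (textureCompression = Uncompressed stands in for the old Truecolor setting on newer Unity versions; the asset path is an example):

using UnityEngine;
using UnityEditor;

public static class ImportSettingsSketch
{
    public static void Apply(string assetPath) //e.g. "Assets/Textures/cubemap_cross.png"
    {
        var importer = (TextureImporter)AssetImporter.GetAtPath(assetPath);
        importer.filterMode = FilterMode.Point;
        importer.textureCompression = TextureImporterCompression.Uncompressed;
        importer.mipmapEnabled = false;
        importer.npotScale = TextureImporterNPOTScale.None;
        importer.isReadable = true;
        importer.SaveAndReimport();
    }
}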