Channel: SCN: Message List
Viewing all 8667 articles

Re: Content Conversion in receiver FTP adapter - issue with comma (,) within the data


Rashid,

 

The adapter is converting the data into the correct format. But when you open the CSV file in Excel, Excel by default uses the comma as the delimiter, which is why you see the data shifted into the next cell.

 

As alternatives, you can use a delimiter other than the comma, replace the comma in the data with some other character, or keep the comma as the delimiter but enclose the values in double quotes (",").
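As a quick illustration of the last option, here is a minimal Python sketch (with made-up sample data) showing how quoting keeps an embedded comma inside a single cell:

```python
import csv
import io

# Made-up sample row where one field contains a comma.
rows = [["1000", "Smith, John", "NY"]]

buf = io.StringIO()
# QUOTE_ALL wraps every field in double quotes, so Excel keeps
# "Smith, John" in a single cell despite the embedded comma.
writer = csv.writer(buf, delimiter=",", quoting=csv.QUOTE_ALL)
writer.writerows(rows)
print(buf.getvalue().strip())  # "1000","Smith, John","NY"
```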

 

Br,

Manoj


Installing a copy of BI Platform for testing


Hi all,

 

We are trying to create a duplicate copy of our production system (BI Platform 4.1 + SP06) on a target server (different host name / IP, no BI Platform installed).

 

I've followed chapter 14, steps 1-3, of the Administrator Guide, but even though the BI Platform setup reports the installation on the target as successful, I see a lot of errors in the event viewer, for example:

 

"Could not open file C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\FileStore\Input\a_160\015\000\4000\lcm.cms.listguid[...].properties".

 

At this point in the process, the FRS has not yet been restored on the target; that is the next step of the admin guide, step 6. (NB: the files exist in the backup of the source FRS.)

 

Since we followed the admin guide exactly, how can we avoid these errors?

Are they critical to the new installation (so that a repair / new installation should be done), or can they be ignored?

 

Thanks.

CAL: how to activate an instance again after the trial period?


Hi,

 

We cannot see our instance any longer and have no idea how to extend the
license for another 90 days.

We are using a developer CAL account for NetWeaver 750 on HANA.

We are surprised that the CAL instance has gone; we expected to be able
to use licenses to extend our trial period.

 

thanks in advance, best regards
Frank

Re: SAP HANA SPS 12: SHINE install


Which SHINE version did you download? There are separate XSC and XSA versions available for SPS 12. What is the name of the file: HCODEMOCONTENT* or XSACSHINE*? HCODEMOCONTENT* is the XSC version; just unzip it and you will see a .tgz Delivery Unit which can be imported via HANA Studio or HALM. If the file is named XSACSHINE*, then it's the XSA version; use the XS command line tool and the command xs install <filename> to install it.
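The naming rule above can be sketched as a trivial helper (shine_variant is a hypothetical function written for illustration, not part of any SAP tool):

```python
def shine_variant(filename):
    """Classify a SHINE download by its filename prefix, per the rule above."""
    name = filename.upper()
    if name.startswith("HCODEMOCONTENT"):
        return "XSC: unzip and import the .tgz Delivery Unit via HANA Studio or HALM"
    if name.startswith("XSACSHINE"):
        return "XSA: install with the XS command line tool (xs install <filename>)"
    return "unknown"

print(shine_variant("XSACSHINE01_12-70001276.zip"))
```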

Error when assigning SID: Action VAL_SID_CONVERT, InfoObject 0POSTxt


Hi Experts,

 

We are getting an error while activating the DSO 0FIGL_O14.

Below is the error.

 

Error when assigning SID: Action VAL_SID_CONVERT, InfoObject 0POSTxt.

 

Please find the screenshot below:

0FIGL_o14New.jpg

 

All the DTP requests are green; the problem occurs while activating the DSO.

 

I am not reporting on the DSO, so shall I remove SID generation during activation?

 

But then I suppose the problem will arise when loading the data into the Cube.

 

Also, due to invalid characters in the PSA there are errors in a few records, but I am unable to locate those records in the PSA.

Is there any way I can locate and remove the erroneous records in the PSA?
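The permitted characters for BW characteristic values are maintained in transaction RSKC. As a rough, hypothetical illustration (plain Python, not BW code, with a made-up allowed set and made-up records), filtering records that contain characters outside an allowed set could look like this:

```python
# Hypothetical allowed-character set, loosely modeled on the BW default
# (capital letters, digits, and a handful of punctuation characters).
ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 !\"%&'()*+,-./:;<=>?_")

# Made-up PSA-like records: (record number, text field).
records = [
    (1, "POSTING TEXT OK"),
    (2, "BAD CHAR \u20ac HERE"),   # euro sign is not in the allowed set
    (3, "lowercase fails too"),    # lowercase letters are not in the set
]

# Keep only the records containing at least one disallowed character.
bad = [(no, text) for no, text in records
       if any(ch not in ALLOWED for ch in text)]
print([no for no, _ in bad])  # [2, 3]
```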

 

0FIGL_O14_2.jpg

 

Any help would be appreciated.

 

Thank you,

Sunil

Re: Workflow offline Approval / Inbound mail setting


Hi Venkadesh,

 

Hopefully you have received your answer by now. If you have gotten it working, please update the discussion with the solution and mark your question answered.

 

If you haven't gotten it yet and are still interested - this is something your Basis team needs to set up. They just need to tell you what the address is, then you need to make the settings in SO50, and write the code for the exit to process the inbound email.

 

I'm only here because we haven't used this feature in a long time and I am about to propose it on a newer SAP system, and wanted to make sure no one was pointing to a newer solution to processing inbound emails. Glad to see it is still in use.

 

~ Margaret

Re: SAP HANA SPS 12: SHINE install


Hi Thomas,

 

thanks for your fast response.

Yes, I'm installing the XSA SHINE content (XSACSHINE01_12-70001276.zip). Sorry for the sparse information above.

 

And now I know why I was confused by the description in the installation guide: it was the wrong one, the XSC guide.

 

Thanks, now I'll try it!

Marketing attributes and BP


Hello all,

 

Could I request your help with a best practice for a custom requirement, please?

 

My client wants to be able to create marketing attributes in SAP CRM and have them replicate to ECC, and the same from ECC to CRM.

I know that we may need to create a view/table in ECC and get them replicated to CRM and vice versa.

 

From the middleware (MW) point of view, what enhancements do we need to make to replicate the data 1) from CRM to ECC and 2) from ECC to CRM?

 

Thanks,

Marry.


Re: Planning another DEV instance on same windows server 2008


Hello QJ,

 

Just follow the SAP Installation Guides and you should be fine.

 

Regards,

Isaías

How to Split XML and Schema in External Definition SAP PI ESR?


Hello Friend,

 

I need your help.

 

I have a problem with an XML file download. I attached the XML file PlantaEnvasadoraGLP.xml; when I load it into an External Definition in SAP PI ESR, it reports a schema error. I believe this could be solved by splitting the XML file and the XSD in the External Definition of SAP PI ESR.

 

Does anybody know how to split an XML file and its schemas and load them into an SAP PI External Definition?

 

I have attached the XML file PlantaEnvasadoraGLP.xml.

 

thanks

 

Kind Regards

Luis Minaya

SAP Solution Manager Central Correction Note Issue


Hi All,

 

Recently I came across the scenario below.

 

In both our Development and Production Solman systems we got a prompt to implement version 17 (the latest) of the SAP Central Correction Note.

It has been done successfully without any issues in the development system.

 

Now we tried to implement the note in the Production Solution Manager system through the transport request that was created when the note was applied in the Development Solman system.

 

However, during the import I got a transport error regarding a program-related syntax error. It affects one program only.

 

Also, after that, when I try to navigate to the System Preparation tab from the Solman_Setup screen, I get a "Page cannot be displayed" message and a rabax state error.

 

I checked the marketplace and found a note suggesting the removal of a BAdI enhancement, after which the navigation issue should be resolved; the same has been suggested by SAP in the OSS message as well.

 

Now my question is: since this is all happening in the production system, what is the best approach to make it green again, given that we don't have any previous snapshot of the system to restore?

 

 

Thanks and Regards

Anurag

Question: mass load and mass changes of measuring points in PM


Greetings.

I am working in the maintenance area of a company. So far, the measuring points ("parent" and "child" in their jargon) have been created manually with IK01 and modified with IK02.

When I asked the company's SAP support about the possibility of mass-loading child points assigned to already-created parent points, they sent me a load template.

I have been studying it and I think it is missing a column, which I still need to analyze with support.

What they have not answered is whether, with that same template (based on IK01), I could "overwrite" the table, changing the parent-point assignment (tvalmed) for a list of child measuring points.

I would greatly appreciate your help.

Re: Trouble using tt:switch / tt:switch-var in simple transformation


The closest transformation I could get to match yours is the following:

<?sap.transform simple?>
<tt:transform xmlns:tt="http://www.sap.com/transformation-templates">

<tt:root name="DATA"/>
<tt:variable name="V_PARAMNAME" type="C" length="10"/>

<tt:template>
    <PARAMS>
        <tt:loop ref="DATA">
            <PARAM>
                <tt:assign to-var="V_PARAMNAME" val="' '"/>
                <tt:attribute name="name">
                    <tt:read var="V_PARAMNAME"/>
                </tt:attribute>
                <tt:switch-var>
                    <tt:cond-var check="var(V_PARAMNAME)='Name'
                                                 or var(V_PARAMNAME)='Status'">
                        <tt:attribute name="name" value-ref="PARAMNAME"/>
                        <tt:value ref="PARAMVALUE"/>
                    </tt:cond-var>
                    <tt:cond-var>
                        <tt:skip/>
                    </tt:cond-var>
                </tt:switch-var>
            </PARAM>
        </tt:loop>
    </PARAMS>
</tt:template>

</tt:transform>

With this transformation, you still get one line in the internal table, but all its components are initial.

Note that your <tt:attribute name="name" ref="var(PARAMNAME)"/> always produces an exception, as "var(PARAMNAME)" (which you can see while debugging the transformation step by step) is not a data node.

 

I came to the conclusion that it's impossible to skip a line in an internal table during deserialization.

 

In your case, maybe the best solution is to define one structure for Name and one for Status instead of an internal table.

Re: Issues with SNC certificates as server identifies incorrect SECUDIR path.


Hello Devendra,

 

Check whether there are conflicting SETENV parameters.

For example, if you have created the parameter SETENV_03 at the beginning of the profile, but the same parameter is defined below your new definition, the second occurrence of SETENV_03 will overwrite the first one.

 

In other words, the SETENV_XX parameters must start at zero (SETENV_00) and be numbered sequentially; where they appear in the profile (beginning or end, and in which order) does not matter.
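As a rough, hypothetical sketch (plain Python, not an SAP tool, with a made-up sample profile), the duplicate and gap check described above could look like this:

```python
import re

def check_setenv(profile_text):
    """Flag duplicate and missing SETENV_XX numbers in a profile.

    Numbering must start at 00 and be sequential; a duplicate number
    means the later occurrence silently overwrites the earlier one.
    """
    nums = [int(m.group(1))
            for m in re.finditer(r"^SETENV_(\d+)\s*=", profile_text, re.M)]
    duplicates = sorted({n for n in nums if nums.count(n) > 1})
    missing = sorted(set(range(len(set(nums)))) - set(nums))
    return duplicates, missing

profile = """SETENV_00 = DIR_LIBRARY=/usr/sap/lib
SETENV_01 = LD_LIBRARY_PATH=/usr/sap/lib
SETENV_03 = SECUDIR=/usr/sap/sec
SETENV_03 = SECUDIR=/wrong/path
"""
print(check_setenv(profile))  # ([3], [2]): 03 is duplicated, 02 is missing
```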

 

If the above was not the case, have you stopped the sapstartsrv process, besides stopping SAP?

This process is not stopped by the "stopsap" command.

 

After stopping SAP (e.g., with "stopsap"), you can execute the command

 

   sapcontrol -nr XX -function StopService

 

to stop sapstartsrv, or

 

   sapcontrol -nr XX -function RestartService

 

to restart it.

 

(XX is the instance number)

 

If the issue persists, please attach the complete profile to this thread.

 

Regards,

Isaías

Re: ERROR_1071(YES): YES cannot be interpreted as a number (termination: RABAX_STATE)


Hi Plaban,

 

Parameters 1071 (Enable risk analysis on form submission) and 1023 (Default report type for risk analysis) work together.

Could you check whether parameter 1023 is maintained with the required value(s) for 1071 to work correctly when set to YES or ASYNCH?

 

 

Regards,

Manju


Re: Problem with "sap.m"


Hi,

 

I suspect it is because of asynchronous loading: you are trying to access sap.m.Text before it has loaded completely. Remove data-sap-ui-preload="async" and try again.

 

Regards,

Viswa

Re: Mass role creation and addition of tcodes to role menu


Hi Sabyasachi Rudra,

 

Could you please tell me how you solved the issue? I have the same requirement and the same problem.

Thanks,

sravanthi

Re: MobiLink is deleting my data

0
0

Interesting problem, and I’m surprised that it hasn’t come up before anytime in the last 15 years.

 

We’re only concerned about this issue if the case sensitivity of the remote database and the consolidated database is the same. If they are different and something odd happens, making the case sensitivity differ between the consolidated and remote was a poor design decision, and odd behavior is not unexpected. Furthermore, if both remote and consolidated are case sensitive, there’s no problem, since there isn’t a primary key violation during the upload phase. The only issue is when the remote and consolidated are both case insensitive (your situation), so we’re going to assume that to be true going forward.

 

The situation you’re running into is unique, since you’ve also coded a handle_error procedure that is ignoring primary key violations.  If that had not been coded, the upload would have failed, and the remote database would have had to ‘fix’ the issue (likely deleting the newly inserted row that had a different case).  We’re not overly happy about the behavior without the custom handle_error procedure either, although it does prevent loss of data.
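A minimal Python sketch of the failure mode described above, under two stated assumptions: that inserts are applied before deletes within one upload, and that a handle_error hook silently ignores the primary key violation (the consolidated table is modeled as a dict keyed on the lower-cased primary key):

```python
# Case-insensitive "consolidated" table: key folded to lower case,
# value holds the original-case key and the row data.
consolidated = {"x": ("X", "old data")}

def apply_upload(inserts, deletes, ignore_pk_violation=True):
    """Apply one upload: inserts first, then deletes (assumed order)."""
    for key, row in inserts:
        folded = key.lower()
        if folded in consolidated:
            if not ignore_pk_violation:
                raise KeyError("primary key violation on %r" % key)
            # handle_error ignores the violation: the insert is dropped.
        else:
            consolidated[folded] = (key, row)
    for key in deletes:
        consolidated.pop(key.lower(), None)

# Remote deleted 'X' and re-inserted 'x'; both arrive in one upload.
apply_upload(inserts=[("x", "new data")], deletes=["X"])
print(consolidated)  # {} : the row has vanished from the consolidated
```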

 

We weren’t initially sure whether this behavior was something we’d done on purpose, because of how [insert random RDBMS here that we support] reacted to the situation. We now don’t think it’s something we did on purpose, but a change in the case of the primary key on a case-insensitive database is definitely something we did not consider during the initial implementation of the code that coalesces operations on the same row when scanning the transaction log. That led us to think about how we could address the issue with a code change. Our initial thought was to send the two operations as an update, but the MobiLink stream is not set up to send different values (differing only in case or otherwise) for the primary key in the post and pre images of the row. There are implications for conflict resolution and for filtering of rows in the download stream that would need to be considered, in addition to the fact that the upload_update synchronization script would somehow need to change the primary key as well. This is a scary, non-trivial change that would alter the majority of the coalescing code on the remote side, likely require a change to the MobiLink stream so we can tell MobiLink whether the remote is case sensitive or not, and require some major changes to how conflict resolution and row filtering occur in the MobiLink server. A simpler fix could involve recognizing the delete/insert with a different case as an update of the “same” primary key value, but then the update sent to the consolidated would include the changes to non-pkey columns, but not the primary key. Also not a great solution.

 

We started to consider solutions that don’t involve a code change, and we’re not super excited about any of them, but we think they’re better than the code change.

  1. At the remote side, if you can guarantee that the delete of ‘X’ and the insert of ‘x’ take place in separate transactions, you could tell dbmlsync to use transactional uploads (-tu), so that a commit will also take place in the consolidated database between the insert and the delete.  We’re really trying to ensure that instead of DELETE – INSERT – SYNCH that you make sure the order is DELETE – SYNCH – INSERT, which will solve the issue.
  2. At the remote side, if you can’t guarantee that the delete of ‘X’ and the insert of ‘x’ take place in separate transactions, you could keep a shadow table on the remote side that tracks which pkey values for the table have been deleted, and to not allow a different mixed case primary key to be re-inserted until there is a successful synchronization.  Sample code :
    create table Admin (
      admin_id varchar(64) primary key,
      data     varchar(64)
    );
    
    create table admin_delete_shadow (
      pkey bigint default autoincrement primary key,
      admin_id varchar(64)
    );
    
    create trigger ad_admin after delete on Admin
    referencing old as pr for each row
    begin
      insert into admin_delete_shadow(admin_id) values (pr.admin_id);
    end;
    
    create trigger bi_admin before insert on Admin
    referencing new as nr for each row
    begin
      declare @deleted_hash varchar(40);
      if exists ( select 1 from admin_delete_shadow where admin_id = nr.admin_id ) then
        select hash(admin_id, 'SHA1') into @deleted_hash
          from admin_delete_shadow where admin_id = nr.admin_id;
        if ( @deleted_hash != hash( nr.admin_id, 'SHA1' ) ) then
          raiserror 28033 'Insert of primary key ''%1!'' on table Admin cannot take place until a synchronization occurs.', nr.admin_id;
        end if;
      end if;
    end;
    
    create procedure sp_hook_dbmlsync_end()
    begin
      if exists ( select 1 from #hook_dict where name = 'exit code' and value = 0 ) then
        delete from admin_delete_shadow;
      end if;
    end;
  3. Combine ideas (1) and (2): track inserts and deletes on the tables in question between synchronizations, and only do transactional uploads (which are typically more expensive) if you determine that the case of a primary key has changed between synchronizations. Unfortunately, whether an upload is transactional or not is not an extended option, so you can't change it in the sp_hook_dbmlsync_set_extended_options stored procedure, which would have been super convenient. You need to actually change the way you invoke dbmlsync based on rows in the shadow tables on the tables in question.
  4. Something slightly more complicated, but this time on the server side. You'd only need to modify the synchronization scripts in the consolidated database, not the schema at every remote database. Make use of the handle_upload event, where you have access to the contents of the upload stream. In this event, check the rows that are being sent as inserts and the rows that are being sent as deletes, and see whether you are running into the issue. If you are, then "do the correct thing" with the row in question in the handle_upload event, and also track the table name and primary key value of the row you handled in a temporary table. You'll then need to modify your upload_insert and upload_delete scripts to be stored procedures that skip rows already handled in the handle_upload event.

 

Regards,

Re: Field Net due date (FAEDT) in AR drilldown reports RFRRD20


Hi Victor,

 

I also have a similar issue. I added KATR6 and it shows in the selection screen and in the report, but it does not fetch a value. If you have already resolved this issue, can you please help me?

 

 

Thanks in advance,

 

Finiatha

Re: Error: Flavor-version 303 is newer than the system version 103


Hi Rajen,

 

OSS Note #2326641 was released this morning. Applying this note corrected the issue for our system.

 

Regards

Chris
